Writing a simple MapReduce program and running it on Hadoop 2.2.0

After several days of fiddling, Hadoop 2.2.0 is finally configured (for how to deploy Hadoop on a Linux platform, see the post on this blog about setting up a pseudo-distributed Hadoop 2.2.0 on Fedora). Today's topic is how to run a MapReduce program of our own on that pseudo-distributed Hadoop 2.2.0 installation.

First, the Maven artifacts this program depends on: hadoop-mapreduce-client-core, hadoop-common, hadoop-mapreduce-client-common, and hadoop-mapreduce-client-jobclient. Be sure to include hadoop-mapreduce-client-common and hadoop-mapreduce-client-jobclient;
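Written out as a full dependency section, the list above might look like the sketch below (the `org.apache.hadoop` group id is standard for these artifacts; the `2.2.0` version is an assumption chosen to match the cluster — adjust it to your own setup):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    <version>2.2.0</version>
  </dependency>
</dependencies>
```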
otherwise, running the program fails with the following exception:

Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
	at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
	at org.apache.hadoop.mapred.JobClient.init(JobClient.java:465)
	at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:444)
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:826)
	at com.wyp.hadoop.MaxTemperature.main(MaxTemperature.java:41)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)

Now for the program itself; the code is as follows:

package com.wyp.hadoop;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

import java.io.IOException;

/**
 * User: wyp
 * Date: 13-10-25
 * Time: 3:26 PM
 * Email: wyphao.2007@163.com
 */
public class MaxTemperatureMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
        String line = value.toString();
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') {
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            output.collect(new Text(year), new IntWritable(airTemperature));
        }
    }
}

package com.wyp.hadoop;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

import java.io.IOException;
import java.util.Iterator;

/**
 * User: wyp
 * Date: 13-10-25
 * Time: 3:36 PM
 * Email: wyphao.2007@163.com
 */
public class MaxTemperatureReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
        int maxValue = Integer.MIN_VALUE;
        while (values.hasNext()) {
            maxValue = Math.max(maxValue, values.next().get());
        }
        output.collect(key, new IntWritable(maxValue));
    }
}

package com.wyp.hadoop;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

import java.io.IOException;

/**
 * User: wyp
 * Date: 13-10-25
 * Time: 3:40 PM
 * Email: wyphao.2007@163.com
 */
public class MaxTemperature {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.err.println("Error!");
            System.exit(1);
        }

        JobConf conf = new JobConf(MaxTemperature.class);
        conf.setJobName("Max Temperature");

        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setMapperClass(MaxTemperatureMapper.class);
        conf.setReducerClass(MaxTemperatureReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        JobClient.runJob(conf);
    }
}

Compile the program above and package it as a jar file; then we can deploy it on Hadoop 2.2.0 (this article assumes you already have Hadoop 2.2.0 deployed). Here is how.

First, start Hadoop 2.2.0 with the following commands:

[wyp@wyp hadoop]$ sbin/start-dfs.sh
[wyp@wyp hadoop]$ sbin/start-yarn.sh

To check whether Hadoop 2.2.0 came up successfully, run:

[wyp@wyp hadoop]$ jps
9582 Main
9684 RemoteMavenServer
16082 Jps
7011 DataNode
7412 ResourceManager
7528 NodeManager
7222 SecondaryNameNode
6832 NameNode

jps is a command that ships with the JDK, in the jdk/bin directory. If the processes above show up on your machine (NameNode, SecondaryNameNode, NodeManager, ResourceManager and DataNode — these five must all be present!), your Hadoop server started successfully. Now run the jar file packaged above (here named Hadoop.jar; its absolute path is /home/wyp/IdeaProjects/Hadoop/out/artifacts/Hadoop_jar/Hadoop.jar) with the following command:

[wyp@wyp Hadoop_jar]$ /home/wyp/Downloads/hadoop/bin/hadoop jar \
    /home/wyp/IdeaProjects/Hadoop/out/artifacts/Hadoop_jar/Hadoop.jar \
    com/wyp/hadoop/MaxTemperature \
    /user/wyp/data.txt \
    /user/wyp/result

(This is one command, split across lines here only because it is long — type it on a single line in practice!) Here, /home/wyp/Downloads/hadoop/bin/hadoop is the absolute path of the hadoop binary; if you have the hadoop command on your PATH via your environment variables, you do not need to spell it out. com/wyp/hadoop/MaxTemperature is the class containing the program's main entry point. /user/wyp/data.txt is an absolute path in the Hadoop file system (HDFS) — note: not a path in your Linux file system! — and points at the file to analyze (the input). /user/wyp/result is the absolute path where the analysis results are written (again: an HDFS path, not a Linux one!), and it must not already exist, otherwise the program throws an exception. This is a safety mechanism in Hadoop — you would not want the output of a job that ran for days to be accidentally overwritten, so if /user/wyp/result exists, the job aborts with an exception. Quite sensible.

Entering the command above should produce output similar to the following:

13/10/28 15:20:44 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/10/28 15:20:44 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/10/28 15:20:45 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
13/10/28 15:20:45 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
13/10/28 15:20:45 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/28 15:20:46 INFO mapreduce.JobSubmitter: number of splits:2
13/10/28 15:20:46 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/10/28 15:20:46 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/10/28 15:20:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1382942307976_0008
13/10/28 15:20:47 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
13/10/28 15:20:49 INFO impl.YarnClientImpl: Submitted application application_1382942307976_0008 to ResourceManager at /0.0.0.0:8032
13/10/28 15:20:49 INFO mapreduce.Job: The url to track the job: http://wyp:8088/proxy/application_1382942307976_0008/
13/10/28 15:20:49 INFO mapreduce.Job: Running job: job_1382942307976_0008
13/10/28 15:20:59 INFO mapreduce.Job: Job job_1382942307976_0008 running in uber mode : false
13/10/28 15:20:59 INFO mapreduce.Job:  map 0% reduce 0%
13/10/28 15:21:35 INFO mapreduce.Job:  map 100% reduce 0%
13/10/28 15:21:38 INFO mapreduce.Job:  map 0% reduce 0%
13/10/28 15:21:38 INFO mapreduce.Job: Task Id : attempt_1382942307976_0008_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:425)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
	... 9 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.wyp.hadoop.MaxTemperatureMapper1 not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1752)
	at org.apache.hadoop.mapred.JobConf.getMapperClass(JobConf.java:1058)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
	... 14 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.wyp.hadoop.MaxTemperatureMapper1 not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1720)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1744)
	... 16 more
Caused by: java.lang.ClassNotFoundException: Class com.wyp.hadoop.MaxTemperatureMapper1 not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1626)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1718)
	... 17 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143

The program actually throws an exception (ClassNotFoundException)! What is going on? To be honest, I am not entirely sure myself! A Google search turned up this explanation: in my experience, this is usually caused by one of the following:

(1) You wrote a Java library, packaged it as a jar, and then wrote a Hadoop program that calls into that jar to implement the mapper and reducer.
(2) You wrote a Hadoop program that calls a third-party Java library.

Afterwards, you distributed your own jar (or the third-party jar) into the HADOOP_HOME directory of each TaskTracker, ran your Java program, and got the error above.

So how do we fix it? One clumsy way is to run the following command before submitting the Hadoop job:

[wyp@wyp Hadoop_jar]$ export \
    HADOOP_CLASSPATH=/home/wyp/IdeaProjects/Hadoop/out/artifacts/Hadoop_jar/

where /home/wyp/IdeaProjects/Hadoop/out/artifacts/Hadoop_jar/ is the directory containing the Hadoop.jar file above. (A cleaner, recommended approach is to add the -libjars option when submitting the job, followed by the absolute paths of the required libraries.) Now re-run the job command:

[wyp@wyp Hadoop_jar]$ hadoop jar /home/wyp/IdeaProjects/Hadoop/out/artifacts/Hadoop_jar/Hadoop.jar com/wyp/hadoop/MaxTemperature /user/wyp/data.txt /user/wyp/result
13/10/28 15:34:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/10/28 15:34:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/10/28 15:34:17 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
13/10/28 15:34:17 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/28 15:34:17 INFO mapreduce.JobSubmitter: number of splits:2
13/10/28 15:34:17 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/10/28 15:34:17 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/10/28 15:34:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1382942307976_0009
13/10/28 15:34:18 INFO impl.YarnClientImpl: Submitted application application_1382942307976_0009 to ResourceManager at /0.0.0.0:8032
13/10/28 15:34:18 INFO mapreduce.Job: The url to track the job: http://wyp:8088/proxy/application_1382942307976_0009/
13/10/28 15:34:18 INFO mapreduce.Job: Running job: job_1382942307976_0009
13/10/28 15:34:26 INFO mapreduce.Job: Job job_1382942307976_0009 running in uber mode : false
13/10/28 15:34:26 INFO mapreduce.Job:  map 0% reduce 0%
13/10/28 15:34:41 INFO mapreduce.Job:  map 50% reduce 0%
13/10/28 15:34:53 INFO mapreduce.Job:  map 100% reduce 0%
13/10/28 15:35:17 INFO mapreduce.Job:  map 100% reduce 100%
13/10/28 15:35:18 INFO mapreduce.Job: Job job_1382942307976_0009 completed successfully
13/10/28 15:35:18 INFO mapreduce.Job: Counters: 43
	File System Counters
		FILE: Number of bytes read=144425
		FILE: Number of bytes written=524725
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1777598
		HDFS: Number of bytes written=18
		HDFS: Number of read operations=9
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=2
		Launched reduce tasks=1
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=38057
		Total time spent by all reduces in occupied slots (ms)=24800
	Map-Reduce Framework
		Map input records=13130
		Map output records=13129
		Map output bytes=118161
		Map output materialized bytes=144431
		Input split bytes=182
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=144431
		Reduce input records=13129
		Reduce output records=2
		Spilled Records=26258
		Shuffled Maps =2
		Failed Shuffles=0
		Merged Map outputs=2
		GC time elapsed (ms)=321
		CPU time spent (ms)=5110
		Physical memory (bytes) snapshot=552824832
		Virtual memory (bytes) snapshot=1228738560
		Total committed heap usage (bytes)=459800576
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=1777416
	File Output Format Counters
		Bytes Written=18

This time the program ran successfully! Pleased? So how do we look at the results it just produced? Simple — run the following commands:

[wyp@wyp Hadoop_jar]$ hadoop fs -ls /user/wyp
Found 2 items
-rw-r--r--   1 wyp supergroup    1777168 2013-10-25 17:44 /user/wyp/data.txt
drwxr-xr-x   - wyp supergroup          0 2013-10-28 15:35 /user/wyp/result
[wyp@wyp Hadoop_jar]$ hadoop fs -ls /user/wyp/result
Found 2 items
-rw-r--r--   1 wyp supergroup          0 2013-10-28 15:35 /user/wyp/result/_SUCCESS
-rw-r--r--   1 wyp supergroup         18 2013-10-28 15:35 /user/wyp/result/part-00000
[wyp@wyp Hadoop_jar]$ hadoop fs -cat /user/wyp/result/part-00000
1901	317
1902	244

With that, a MapReduce program you wrote yourself has finally run successfully! The test data used by the program can be downloaded from: http://pan.baidu.com/s/1isacm

Unless otherwise stated, all articles on this blog are original. Reposting by individuals or companies is prohibited. Thanks for understanding: 过往记忆 (https://www.iteblog.com/)
Permalink: ()
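As a closing aside, the fixed-width parsing that MaxTemperatureMapper performs can be sanity-checked locally, without a cluster. The sketch below (the class name `ParseCheck` and the sample record are made up for illustration — the record is synthetic, not a line from the real data set) runs a string through the same substring offsets the mapper uses:

```java
public class ParseCheck {
    private static final int MISSING = 9999;

    // Same fixed-width parsing as MaxTemperatureMapper: returns
    // "year<TAB>temperature", or null when the record is filtered out
    // (missing value or bad quality flag).
    static String parse(String line) {
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') {
            // skip the explicit '+' sign before the 4-digit value
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            return year + "\t" + airTemperature;
        }
        return null;
    }

    // A synthetic 93-column record: '0' everywhere except the fields the
    // mapper reads -- year (columns 15-18), signed temperature (87-91)
    // and quality flag (92).
    static String buildSample() {
        char[] cols = new char[93];
        java.util.Arrays.fill(cols, '0');
        StringBuilder sb = new StringBuilder(new String(cols));
        sb.replace(15, 19, "1901");
        sb.replace(87, 92, "+0317");
        sb.replace(92, 93, "1");
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints "1901" and "317" separated by a tab
        System.out.println(parse(buildSample()));
    }
}
```

A record whose temperature field is 9999 (the MISSING sentinel) comes back as null, which is exactly the filtering the mapper applies before emitting a (year, temperature) pair.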