Chapter 8  Building Your Own Cloud with Hadoop

8.3 Testing the Hadoop Cloud System

With the 4-node Hadoop cluster running, we can test it by submitting the WordCount example that ships with Hadoop as a Map/Reduce job. Here $HADOOP_HOME is /home/hadoop/hadoop-0.20.2. First create a wordcount directory and use echo to generate two small input files:

$ mkdir wordcount
$ cd wordcount
$ echo "Hello World Bye World" >> inputfile1
$ echo "Hello Hadoop Goodbye Hadoop" >> inputfile2

Next, use $HADOOP_HOME/bin/hadoop to upload the directory into HDFS so the Map/Reduce job can read it:

$ bin/hadoop dfs -put ./wordcount input

Then run the wordcount program packaged in hadoop-0.20.2-examples.jar. The NameNode/JobTracker host hdp0 dispatches the Map/Reduce tasks to the TaskTrackers running on the DataNodes:

$ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
10/04/15 11:55:44 INFO input.FileInputFormat: Total input paths to process : 2
10/04/15 11:55:45 INFO Running job: job_201004150649_0001
Cloud Computing Technical Guide: Applications, Platforms, and Architecture

10/04/15 11:55:46 INFO map 0% reduce 0%
10/04/15 11:55:53 INFO map 50% reduce 0%
10/04/15 11:55:56 INFO map 100% reduce 0%
10/04/15 11:56:05 INFO map 100% reduce 100%
10/04/15 11:56:07 INFO Job complete: job_201004150649_0001
10/04/15 11:56:07 INFO Counters: 18

After issuing the command you can clearly see the Hadoop cloud system reporting the progress of its Map and Reduce work as percentages. You can also point a web browser at http://hdp0:50030 to see which Map and Reduce jobs the cloud system is currently running and how far along each one is, as shown in Figure 8-24.

Figure 8-24  Viewing the Map and Reduce jobs of the Hadoop cloud system
Once the Map/Reduce job finishes, retrieve the output directory from HDFS and inspect the result:

$ bin/hadoop dfs -get output output
$ cd output/
$ cat part-r-00000
Bye 1
Goodbye 1
Hadoop 2
Hello 2
World 2

Hadoop ships with other MapReduce examples as well. For instance, the pi program estimates the value of pi; the following command runs it with 4 maps and 200000 samples per map:

$ bin/hadoop jar hadoop-0.20.2-examples.jar pi 4 200000

If these Map and Reduce jobs complete successfully, the Hadoop cloud system is working properly.
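The pi example in hadoop-0.20.2-examples.jar estimates pi with a (quasi-)Monte Carlo method: it throws points into the unit square and counts how many land inside the quarter circle, with each map task handling one batch of samples. The following is a minimal single-machine sketch of that idea, not the Hadoop implementation itself; the class and method names are ours.

```java
import java.util.Random;

// Single-machine sketch of the Monte Carlo pi estimation idea behind the
// Hadoop "pi" example (illustrative only; not Hadoop code).
public class PiEstimate {
    public static double estimate(long samples, long seed) {
        Random rnd = new Random(seed);
        long inside = 0;
        for (long i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) inside++;  // point fell inside the quarter circle
        }
        // (quarter-circle area) / (square area) = pi/4, so scale the ratio by 4
        return 4.0 * inside / samples;
    }

    public static void main(String[] args) {
        // 200000 samples, mirroring the per-map sample count used above
        System.out.println(PiEstimate.estimate(200000, 42));
    }
}
```

In the real example the counting is spread across the 4 map tasks and a single reduce sums their counts; the arithmetic is the same.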
8.4 Having Hadoop Run More Complex MapReduce Computations

A Hadoop MapReduce application consists of a Mapper and a Reducer. The map function transforms an input pair <k1, v1> into intermediate pairs <k2, v2>, and the reduce function turns the grouped intermediate pairs into output pairs <k3, v3>:

<k1, v1> (input) -> map -> <k2, v2> (intermediate) -> reduce -> <k3, v3> (output)

The Mapper reads the input <k1, v1> key/value pairs and emits intermediate <k2, v2> key/value pairs; Hadoop then groups all intermediate values that share the same key before handing them to the reducers. Each node in a Hadoop cluster typically runs on the order of 10-100 map tasks per job; for example, with a block size of 128 MB, a 10 TB input yields roughly 82,000 map tasks.

On the reduce side, the intermediate <k2, v2> pairs are turned into the final <k3, v3> output in three phases: Shuffle, Sort, and Reduce. During Shuffle, the framework fetches the relevant partition of every map task's <k2, v2> output over HTTP. During Sort, Hadoop sorts the <k2, v2> pairs by their key k2 (the Shuffle and Sort phases in fact overlap). Finally, the Reduce phase produces the <k3, v3> output. A common guideline is to set the number of reduce tasks to between 0.95 and 1.75 times the number of nodes:

Number of Reduces = Total Nodes * (0.95 to 1.75)

so that the reduce tasks make good use of the Hadoop cluster.
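The <k1, v1> -> map -> <k2, v2> -> shuffle/sort -> reduce -> <k3, v3> data flow described above can be sketched in plain Java as a single-machine word count. This is only an illustration of the phases; the class and method names below are ours, not part of the Hadoop API.

```java
import java.util.*;

// Single-machine sketch of the MapReduce data flow: map emits intermediate
// (word, 1) pairs, shuffle/sort groups them by key, reduce sums each group.
public class MapReduceFlow {

    // map phase: one input line -> list of intermediate (word, 1) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }
        return out;
    }

    // shuffle/sort phase: group intermediate values by key; the TreeMap also
    // sorts keys, mimicking the sorted part-r-* output files
    static SortedMap<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        SortedMap<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    // reduce phase: (word, [1, 1, ...]) -> (word, sum)
    static SortedMap<String, Integer> reduce(SortedMap<String, List<Integer>> grouped) {
        SortedMap<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }

    public static SortedMap<String, Integer> run(String[] lines) {
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) intermediate.addAll(map(line));
        return reduce(shuffle(intermediate));
    }

    public static void main(String[] args) {
        String[] lines = { "Hello World Bye World", "Hello Hadoop Goodbye Hadoop" };
        System.out.println(run(lines));
    }
}
```

Run on the two input lines from Section 8.3, this reproduces the same counts the cluster produced (Bye=1, Goodbye=1, Hadoop=2, Hello=2, World=2); what Hadoop adds is running the map and reduce calls in parallel across nodes.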
Each Map and Reduce task runs in its own Java Virtual Machine (JVM) process on Linux. The hadoop job command lets you inspect a finished Map/Reduce job; for example, -history prints a task summary from the job history stored in the output directory:

$ bin/hadoop job -history output       (show the Map/Reduce job history)

Task Summary
===================================================================
Kind     Total  Successful  Failed  Killed  StartTime             FinishTime
Setup    1      1           0       0       16-Apr-2010 12:03:54  16-Apr-2010 12:03:54 (0sec)
Map      2      2           0       0       16-Apr-2010 12:03:55  16-Apr-2010 12:04:00 (5sec)
Reduce   1      1           0       0       16-Apr-2010 12:04:02  16-Apr-2010 12:04:12 (9sec)
Cleanup  1      1           0       0       16-Apr-2010 12:04:14  16-Apr-2010 12:04:15 (0sec)
===================================================================

The hadoop job command summarizes how many Map and Reduce tasks ran and whether they succeeded. Adding the all keyword prints the full details of every Map/Reduce task:

$ bin/hadoop job -history all output   (show full Map/Reduce task details)

Besides job control, a few HDFS commands are used constantly when running MapReduce jobs on Hadoop: -put uploads local files into HDFS, and -rmr recursively deletes a directory from HDFS:

$ bin/hadoop dfs -put myjob input      (upload myjob into HDFS)
$ bin/hadoop dfs -rmr input            (delete the input directory from HDFS)
$ bin/hadoop dfs -rmr output           (delete the output directory from HDFS)

With these Map/Reduce job-control commands in hand, let us run the Hadoop WordCount MapReduce example again with slightly more complicated input.
First create a wordcount2 directory and two input files whose words carry punctuation and mixed case:

$ mkdir wordcount2                     (create the wordcount2 directory)
$ cd wordcount2
$ echo "Hello World, Bye World?" >> input1
$ echo "Hello Hadoop, Goodbye to hadoop." >> input2

Upload the directory into HDFS and run the WordCount Map/Reduce example on it:

$ bin/hadoop dfs -put ./wordcount2 input2
$ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input2 output2

When the WordCount MapReduce job finishes, fetch output2 from HDFS and inspect the key/value result:

$ bin/hadoop dfs -get output2 output2
$ cd output2/
$ cat part-r-00000
Bye 1
Goodbye 1
Hadoop, 1
Hello 2
World, 1
World? 1
hadoop. 1
to 1

This is how a MapReduce job runs on Hadoop against more realistic input.
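The output above looks odd at first: "World," and "World?" are counted as different words, and "hadoop." does not merge with "Hadoop,". That is because the example's map splits lines on whitespace only (via java.util.StringTokenizer) and compares words case-sensitively. A minimal illustration of that tokenization (the class and method names are ours):

```java
import java.util.StringTokenizer;

// Demonstrates whitespace-only tokenization: punctuation stays attached to
// words, so "World," and "World?" become distinct keys in the word count.
public class TokenizeDemo {
    public static String[] tokens(String line) {
        StringTokenizer itr = new StringTokenizer(line);  // default delimiters: whitespace
        String[] out = new String[itr.countTokens()];
        for (int i = 0; itr.hasMoreTokens(); i++) {
            out[i] = itr.nextToken();
        }
        return out;
    }

    public static void main(String[] args) {
        for (String t : tokens("Hello World, Bye World?")) {
            System.out.println(t);
        }
    }
}
```

Unless the program folds case and strips punctuation itself, the counts come out exactly as shown above; handling this properly is the point of the enhanced WordCount program developed in the next section.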
8.5 Developing a MapReduce Cloud Computing Program

We now develop a WordCount MapReduce program of our own for the Hadoop cloud system, one that can be configured to ignore case and skip unwanted patterns. Under /home/hadoop/hadoop-0.20.2, create WordCount.java:

$ cd /home/hadoop/hadoop-0.20.2/
$ gedit WordCount.java

// WordCount.java
package org.myorg;

import java.io.*;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount extends Configured implements Tool {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    static enum Counters { INPUT_WORDS }

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    private boolean caseSensitive = true;
    private Set<String> patternsToSkip = new HashSet<String>();

    private long numRecords = 0;
    private String inputFile;

    public void configure(JobConf job) {
      caseSensitive = job.getBoolean("wordcount.case.sensitive", true);
      inputFile = job.get("map.input.file");

      if (job.getBoolean("wordcount.skip.patterns", false)) {
        Path[] patternsFiles = new Path[0];
        try {
          patternsFiles = DistributedCache.getLocalCacheFiles(job);
        } catch (IOException ioe) {
          System.err.println("Caught exception while getting cached files: "
              + StringUtils.stringifyException(ioe));
        }
        for (Path patternsFile : patternsFiles) {
          parseSkipFile(patternsFile);
        }
      }
    }

    private void parseSkipFile(Path patternsFile) {
      try {
        BufferedReader fis =
            new BufferedReader(new FileReader(patternsFile.toString()));
        String pattern = null;
        while ((pattern = fis.readLine()) != null) {
          patternsToSkip.add(pattern);
        }
      } catch (IOException ioe) {