Install Eclipse
Download Eclipse (download link), extract, and install it. I installed it under /usr/local/software/.
-
Install the Hadoop plugin in Eclipse
Download the Hadoop plugin (download link) and copy it into the eclipse/plugins directory.
-
Restart Eclipse and configure the Hadoop installation directory
If the plugin was installed successfully, open Window–>Preferences and you will see a Hadoop Map/Reduce entry. In this entry, set the Hadoop installation directory, then close the dialog.
-
Configure Map/Reduce Locations
Open Map/Reduce Locations via Window–>Show View.
Create a new Hadoop Location in the Map/Reduce Locations view: right-click–>New Hadoop Location. In the dialog, set a Location name (e.g. Hadoop1.0) and configure the Map/Reduce Master and the DFS Master. The Host and Port fields are the address and port you configured in mapred-site.xml and core-site.xml respectively. For example:
Map/Reduce Master: 192.168.239.130:9001
DFS Master: 192.168.239.130:9000
When finished, close the dialog. Click DFS Locations–>Hadoop1.0: if the HDFS folders are displayed, the configuration is correct; if you see "Connection refused", check your configuration.
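For reference, the Host/Port values above mirror the Hadoop configuration files. A minimal sketch of the relevant properties, assuming a Hadoop 1.x setup (property names differ in later Hadoop versions):

```xml
<!-- core-site.xml: the DFS Master address -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.239.130:9000</value>
</property>

<!-- mapred-site.xml: the Map/Reduce Master address -->
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.239.130:9001</value>
</property>
```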
-
Create the WordCount project
File–>Project, select Map/Reduce Project, and enter the project name, e.g. WordCount.
In the WordCount project, create a new class named WordCount with the following code:

```java
package WordCount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    // Mapper: emits (word, 1) for every whitespace-separated token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts collected for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // job name
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // The reducer doubles as a combiner, since summing is associative.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        // output key/value formats
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // input and output paths
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        job.waitForCompletion(true);
        return job.isSuccessful() ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
```
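The map and reduce logic above can be illustrated without a cluster. The following is a plain-Java sketch (class and method names are my own, not part of the Hadoop API) of what the job computes for a single input split:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountSketch {
    // Simulates map (tokenize, emit 1 per word) plus reduce (sum per word) in memory.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        StringTokenizer itr = new StringTokenizer(text); // same tokenization as the mapper
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum); // the reducer's sum step
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello hadoop hello eclipse"));
        // {hello=2, hadoop=1, eclipse=1}
    }
}
```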
-
Configure the run parameters
In the Run Configurations dialog, select Java Application, right-click–>New; this creates an application named WordCount. To configure the run parameters, open the Arguments tab and, under Program arguments, enter the input folder you want to pass to the program and the folder where the program should save its results.
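For example, the Program arguments field might look like the following (the /user/root path is hypothetical; the host and port come from the DFS Master configured earlier):

```
hdfs://192.168.239.130:9000/user/root/wordcountInput hdfs://192.168.239.130:9000/user/root/wordcountOutput
```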
-
Click Run to run the program
Click Run to launch the program; after a while it will finish. When the run ends, inspect the output with the command hadoop dfs -ls wordcountOutput. You will find two folders and one file. View the part-r-00000 file with hadoop dfs -cat wordcountOutput/part-r-00000 to see the result of the run.
You can also view the results from within Eclipse: