Can somebody give a good example link for MapReduce with HBase? My requirement is to run MapReduce on an HDFS file and store the reducer output in an HBase table. The mapper input will be an HDFS file and its output will be Text/IntWritable key-value pairs. The reducer output will be a Put object, i.e. sum the reducer's Iterable of IntWritable values and store the result in an HBase table.
Here is the code that will solve your problem.

Driver:

HBaseConfiguration conf = HBaseConfiguration.create();
Job job = new Job(conf, "JOB_NAME");
job.setJarByClass(yourclass.class);
job.setMapperClass(yourMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
FileInputFormat.setInputPaths(job, new Path(inputPath));
// Wires the reducer so its Put objects are written into the HBase table TABLE.
// Note: initTableReducerJob already sets the reducer class, so the explicit
// setReducerClass call below is redundant but harmless.
TableMapReduceUtil.initTableReducerJob(TABLE, yourReducer.class, job);
job.setReducerClass(yourReducer.class);
job.waitForCompletion(true);

Mapper and reducer skeletons:

class yourMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // @Override map()
}

class yourReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
    // @Override reduce()
}
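As a sketch of how those skeletons might be filled in, here is a word-count style job: the mapper emits (word, 1) pairs from the HDFS input, and the reducer sums the IntWritable values and writes the total to HBase as a Put. The class names, the column family "cf", and the qualifier "count" are hypothetical; adjust them to your table schema. This uses the older HBase client API matching the answer's `new Job(conf, ...)` style (in HBase 1.x and later, `Put.add` becomes `Put.addColumn`).

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: splits each line of the HDFS file into words
// and emits (word, 1) as Text/IntWritable pairs.
class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Hypothetical reducer: sums the IntWritable values for each key and
// stores the total in the HBase table configured via initTableReducerJob.
class WordCountReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // Row key = the word; one cell cf:count holding the summed value.
        Put put = new Put(Bytes.toBytes(key.toString()));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(sum));
        context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
}
```

Running this requires the Hadoop and HBase client jars on the classpath and a live HBase cluster with the target table (and its "cf" column family) already created.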