Saving JSON data in Hadoop's HDFS

Problem description

I have the following Reducer class:

public static class TokenCounterReducer extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {

        JSONObject jsn = new JSONObject();

        for (Text value : values) {
            // split the tab-separated value into target list and source
            String[] vals = value.toString().split("\t");
            String[] targetNodes = vals[0].split(",", -1);
            jsn.put("source", vals[1]);
            jsn.put("target", targetNodes);
        }
        // context.write(key, new Text(sum));
    }
}

Going through examples (disclaimer: newbie here), I can see that the general output type seems to be a key/value store.

But what if I don't have any key in the output? Or what if my output is in some other format (JSON, in my case)?

Anyway, from the above code: how do I write the JSON object to HDFS?

This was trivial in Hadoop streaming, but how do I do it in Hadoop Java?

Recommended answer

If you just want to write a list of JSON objects to HDFS without caring about the notion of key/value, you can use a NullWritable as your Reducer output value:

public static class TokenCounterReducer extends Reducer<Text, Text, Text, NullWritable> {
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            JSONObject jsn = new JSONObject();
            // ... build the JSON object ...
            // emit the JSON string as the key, with no value
            context.write(new Text(jsn.toString()), NullWritable.get());
        }
    }
}

Note that you will need to change your job configuration accordingly:

job.setOutputValueClass(NullWritable.class);
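To show where that setting fits, here is a minimal driver sketch. The job name, input/output paths, and the mapper class are hypothetical placeholders, not something given in the question:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JsonOutputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "json-output"); // hypothetical job name
        job.setJarByClass(JsonOutputDriver.class);
        job.setMapperClass(TokenCounterMapper.class);   // hypothetical mapper class
        job.setReducerClass(TokenCounterReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);    // the key change from above
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With the default TextOutputFormat, each reducer output line is then just the JSON string, since the NullWritable value prints nothing.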

By "writing your JSON object to HDFS" I understood that you want to store a String representation of your JSON, which is what I describe above. If you wanted to store a binary representation of your JSON in HDFS, you would need to use a SequenceFile. Obviously you could write your own Writable for this, but if you intend to have a simple String representation, I feel it's just easier this way.
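To illustrate what that String representation looks like, here is a plain-Java sketch of the JSON assembly for a single reduce value, outside of Hadoop and without a JSON library. It assumes the question's tab-separated layout, where each value is a comma-separated target list, a tab, and then the source node:

```java
public class JsonAssembly {
    // Mimics the reducer's per-value parsing: "targets\tsource",
    // where targets is a comma-separated list of node ids.
    static String toJson(String value) {
        String[] vals = value.split("\t");
        String[] targetNodes = vals[0].split(",", -1);
        StringBuilder sb = new StringBuilder();
        sb.append("{\"source\":\"").append(vals[1]).append("\",\"target\":[");
        for (int i = 0; i < targetNodes.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('"').append(targetNodes[i]).append('"');
        }
        sb.append("]}");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson("n1,n2,n3\tn0"));
    }
}
```

In the real reducer, a library such as org.json would handle the quoting and escaping that this sketch does by hand.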
