This article covers how to fix Hadoop 2.2.0 mapreduce jobs that stop running after upgrading from hadoop 1.0.4. It should be a useful reference for anyone troubleshooting the same problem; read on to learn more.

Problem Description

I have upgraded my hadoop version from 1.0.4 to 2.2.0. The mapreduce job was running fine earlier. I have now added almost all of the jars provided with hadoop 2.2.0, but it still gives me the exception below. Let me know where I am going wrong.

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException
    at org.apache.hadoop.ipc.ProtobufRpcEngine.<clinit>(ProtobufRpcEngine.java:69)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1659)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1624)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1718)
    at org.apache.hadoop.ipc.RPC.getProtocolEngine(RPC.java:203)
    at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:328)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)

Caused by: java.lang.ClassNotFoundException: com.google.protobuf.ServiceException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)


Thanks in advance.

Recommended Answer


  1. Please check your protobuf dependency. AFAIR Hadoop 2.x needs protobuf 2.5.x (please check your hadoop dependencies), and you may simply be picking up an outdated version pulled in by some external component. A quick classpath probe is sketched below.
  2. Just a note: if your job uses the org.apache.hadoop.mapreduce package, it could be binary incompatible with Hadoop 2.x, and in that case you should recompile it. There should be no such issue with the "old" org.apache.hadoop.mapred API, but I'd recommend recompiling in any case; see the driver sketch after the probe.
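
To follow up on point 1: one quick way to confirm whether protobuf-java is on the classpath at all, and which jar supplies it, is a small probe like the one below. This is a minimal sketch, not part of Hadoop; the class name ProtobufCheck is made up, and it simply asks the classloader for the exact class the stack trace reports as missing.

    // Hypothetical helper, not part of Hadoop: checks whether the class the
    // stack trace reports as missing is visible, and which jar provides it.
    public class ProtobufCheck {
        public static void main(String[] args) {
            try {
                Class<?> cls = Class.forName("com.google.protobuf.ServiceException");
                java.security.CodeSource src = cls.getProtectionDomain().getCodeSource();
                // Print the jar that actually supplies the class, so its version
                // can be compared against the 2.5.x that Hadoop 2.x expects.
                System.out.println("Found in: "
                        + (src != null ? src.getLocation() : "<bootstrap classpath>"));
            } catch (ClassNotFoundException e) {
                System.out.println("protobuf-java is not on the classpath at all");
            }
        }
    }

If the probe reports the class missing, or prints a protobuf jar older than 2.5.x (for example one dragged in transitively by another component), putting protobuf-java 2.5.x on the classpath should clear the NoClassDefFoundError.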
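
For point 2, recompiling against the Hadoop 2.2.0 jars is usually enough to surface any incompatibility at build time. Below is a minimal new-API driver, again only a sketch: RecompileCheck and the argument-based input/output paths are placeholders. If this compiles and submits cleanly against the 2.2.0 jars, the binary-compatibility concern does not apply to your job's driver code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Placeholder driver, only to verify that code built against the
    // Hadoop 2.2.0 jars compiles and submits (identity map/reduce job).
    public class RecompileCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "recompile-check");
            job.setJarByClass(RecompileCheck.class);
            // No mapper/reducer set, so Hadoop uses the identity Mapper/Reducer;
            // with the default TextInputFormat the map output is LongWritable/Text.
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }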

Hope this helps at least.

That concludes this article on Hadoop 2.2.0 mapreduce jobs not running after upgrading from hadoop 1.0.4. We hope the recommended answer helps, and thank you for your support!
