This article looks at how to handle a dependency conflict that comes up when integrating a Play Framework application with Cloudera HBase 1.0.0. It should be a useful reference if you run into the same problem; the details follow below.

Problem Description

I tried to connect my Play Framework (2.4.2) web application to a Cloudera HBase cluster. I included the HBase dependencies in my build.sbt file and used the HBase sample code to insert a cell into a table. However, I got the exception below, which appears to be a dependency conflict between Play Framework and HBase. My sample code and build.sbt file are attached as well. I would be grateful for your help in resolving this error.

    [ERROR] [07/21/2015 12:03:05.919] [application-akka.actor.default-dispatcher-5] [ActorSystem(application)] Uncaught fatal error from thread [application-akka.actor.default-dispatcher-5] shutting down ActorSystem [application]
    java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
        at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:434)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1123)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1110)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1262)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1126)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1496)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1107)
        at controllers.Application.index(Application.java:44)
        at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(Routes.scala:95)
        at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(Routes.scala:95)
        at play.core.routing.HandlerInvokerFactory$$anon$4.resultCall(HandlerInvoker.scala:136)
        at play.core.routing.HandlerInvokerFactory$JavaActionInvokerFactory$$anon$14$$anon$3$$anon$1.invocation(HandlerInvoker.scala:127)
        at play.core.j.JavaAction$$anon$1.call(JavaAction.scala:70)
        at play.http.DefaultHttpRequestHandler$1.call(DefaultHttpRequestHandler.java:20)
        at play.core.j.JavaAction$$anonfun$7.apply(JavaAction.scala:94)
        at play.core.j.JavaAction$$anonfun$7.apply(JavaAction.scala:94)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at play.core.j.HttpExecutionContext$$anon$2.run(HttpExecutionContext.scala:40)
        at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:70)
        at play.core.j.HttpExecutionContext.execute(HttpExecutionContext.scala:32)
        at scala.concurrent.impl.Future$.apply(Future.scala:31)
        at scala.concurrent.Future$.apply(Future.scala:492)
        at play.core.j.JavaAction.apply(JavaAction.scala:94)
        at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:105)
        at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:105)
        at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
        at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:104)
        at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:103)
        at scala.Option.map(Option.scala:146)
        at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:103)
        at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:96)
        at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:524)
        at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:524)
        at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:560)
        at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:560)
        at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:536)
        at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:536)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

This is my build.sbt file:

name := """HbaseTest"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayJava)

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  javaJdbc,
  cache,
  javaWs
)
// hbase
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-cdh5.4.4"
libraryDependencies += "org.apache.hbase" % "hbase-annotations" % "1.0.0-cdh5.4.4"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0-cdh5.4.4"
libraryDependencies += "org.apache.hbase" % "hbase-protocol" % "1.0.0-cdh5.4.4"
// hadoop
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0-cdh5.4.4"
libraryDependencies += "org.apache.hadoop" % "hadoop-annotations" % "2.6.0-cdh5.4.4"
libraryDependencies += "org.apache.hadoop" % "hadoop-auth" % "2.6.0-cdh5.4.4"
// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator

This is my code:

package controllers;

import play.*;
import play.mvc.*;
import views.html.*;

import java.io.IOException;
import java.util.HashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.util.Bytes;
public class Application extends Controller {

    public Result index() {
        String ZooKeeperIP = "10.12.7.43";
        String ZooKeeperPort = "2181";
        String HBaseMaster = "10.12.7.43:60000";
        Configuration hBaseConfig;
        Connection connection = null;
        //TableName TABLE_NAME = "sample";
        hBaseConfig = HBaseConfiguration.create();
        hBaseConfig.set("hbase.zookeeper.quorum", ZooKeeperIP);
        hBaseConfig.set("hbase.zookeeper.property.clientPort", ZooKeeperPort);
        hBaseConfig.set("hbase.master", HBaseMaster);

        //connection = ConnectionFactory.createConnection(hBaseConfig);

        try {
            connection = ConnectionFactory.createConnection(hBaseConfig);
            HTable table = new HTable(hBaseConfig, "sample");
            Put p = new Put(Bytes.toBytes("1"));
            p.add(Bytes.toBytes("a"), Bytes.toBytes("b"), Bytes.toBytes("4"));
            table.put(p);
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println(e.getMessage());
        }
        return ok(index.render("Your new application is ready."));
    }

}
Solution

As far as I can see, the trouble is with the dependencies.
Specifically, the Guava library (a common source of conflicts with Hadoop).
Play pulls in a newer version of Guava, and newer Guava releases no longer expose the no-argument Stopwatch constructor that this version of HBase calls, which is exactly what the IllegalAccessError in the stack trace is complaining about.
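
To confirm the diagnosis, you can run the evicted task in the sbt console; Guava should show up there, with the version Hadoop and HBase declare losing out to the newer one Play brings in. As a quick experiment, you could also try forcing the older Guava in build.sbt. This is only a sketch: the version number is an assumption (check what hbase-client actually declares), and pinning an old Guava can just as easily break Play or one of its own dependencies.

// build.sbt -- experiment: force the Guava version HBase was compiled against.
// 12.0.1 is an assumption (roughly what HBase 1.0 declares); verify with the evicted task.
// WARNING: Play and its dependencies may rely on newer Guava APIs, so this can simply
// trade one runtime error for another.
dependencyOverrides += "com.google.guava" % "guava" % "12.0.1"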

You could approach this problem in multiple ways (unfortunately, all of the ones I know of are somewhat hacky).

The easy way is to use a hack like the one in Zipkin, where we add the Stopwatch ourselves.
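
The idea behind that hack is to put your own copy of com.google.common.base.Stopwatch on the classpath so that it shadows Guava's, restoring the old public no-argument constructor (and the removed elapsedMillis method) that HBase still calls. The class below is only an illustration of the idea, not the actual Zipkin code: it covers the calls visible in this stack trace plus the most common Stopwatch methods, and a real shim would have to cover every Stopwatch usage of every library on your classpath, which is exactly what makes this approach fragile. The file path is an assumption for a Play project layout.

// app/com/google/common/base/Stopwatch.java (path is an assumption)
// A minimal stand-in for Guava's Stopwatch that restores the old public constructor
// and elapsedMillis() expected by HBase 1.0. It only takes effect if it shadows the
// real Guava class on the classpath.
package com.google.common.base;

import java.util.concurrent.TimeUnit;

public final class Stopwatch {
    private boolean running;
    private long startNanos;
    private long elapsedNanos;

    // The package of this class matches Guava's, so HBase's new Stopwatch() resolves here.
    public Stopwatch() {
    }

    public static Stopwatch createUnstarted() {
        return new Stopwatch();
    }

    public static Stopwatch createStarted() {
        return new Stopwatch().start();
    }

    public Stopwatch start() {
        running = true;
        startNanos = System.nanoTime();
        return this;
    }

    public Stopwatch stop() {
        elapsedNanos += System.nanoTime() - startNanos;
        running = false;
        return this;
    }

    public Stopwatch reset() {
        elapsedNanos = 0;
        running = false;
        return this;
    }

    public boolean isRunning() {
        return running;
    }

    private long totalNanos() {
        return running ? elapsedNanos + (System.nanoTime() - startNanos) : elapsedNanos;
    }

    public long elapsed(TimeUnit unit) {
        return unit.convert(totalNanos(), TimeUnit.NANOSECONDS);
    }

    // Removed from newer Guava releases; this is the method older HBase code relies on.
    public long elapsedMillis() {
        return elapsed(TimeUnit.MILLISECONDS);
    }

    // Old Guava method name kept for callers compiled against very old versions.
    public long elapsedTime(TimeUnit unit) {
        return elapsed(unit);
    }
}

Whether this class actually wins over the one inside the Guava jar depends on classpath ordering, so verify at runtime (for example by logging from the constructor) that your copy is really the one being loaded before relying on it.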

Another way would be to somehow separate out the HBase operations from the Play application (which would require a lot of work and design changes).

It would be much easier if sbt supported shading; as far as I know, it doesn't yet.
You could still work around it in sbt with some effort, similar to how Spark deals with the same Guava problem.
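
For what it's worth, the sbt-assembly plugin has since gained basic shading support (shade rules arrived around version 0.14.0), and the Spark-style workaround roughly amounts to this: build hbase-client and the old Guava it needs into a separate, pre-shaded jar, and have the Play project depend on that jar instead of on hbase-client directly. The sketch below is not a drop-in solution; it assumes a separate sbt helper project whose only job is to assemble the HBase client, a recent sbt-assembly in that project's plugins.sbt, and an arbitrary relocated package name.

// build.sbt of a separate "hbase-shaded" helper project (assumption: its project/plugins.sbt
// contains addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.0") or later).
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-cdh5.4.4"

// Relocate every Guava class bundled into this assembly and rewrite all references to it,
// so the resulting jar carries its own private copy of the Guava version HBase needs.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "hbaseshaded.guava.@1").inAll
)

The Play application would then put the assembled jar in its lib/ directory (unmanaged dependencies) instead of declaring hbase-client in libraryDependencies, so Play keeps its newer Guava while HBase uses the relocated old one. You would also want to drop the direct Hadoop dependencies from the Play build (or bundle them the same way) so the classes are not duplicated.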

That concludes this article on the dependency conflict when integrating with Cloudera HBase 1.0.0. We hope the answer above helps you solve the same problem.
