Error exporting data from Google Cloud Bigtable

This article describes how to handle an error that occurs when exporting data from Google Cloud Bigtable. It should be a useful reference for anyone who runs into the same problem.

Problem description

While going through the Google docs, I get the following stack trace on the final export command (executed from the master instance with the appropriate env variables set).

${HADOOP_HOME}/bin/hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> <gs://bucket>

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-02-08 23:39:39,068 INFO  [main] mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false
2016-02-08 23:39:39,213 INFO  [main] gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.4-hadoop2
java.lang.IllegalAccessError: tried to access field sun.security.ssl.Handshaker.localSupportedSignAlgs from class sun.security.ssl.ClientHandshaker
    at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
    at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
    at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getBucket(GoogleCloudStorageImpl.java:1599)
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1554)
    at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.getItemInfo(CacheSupplementedGoogleCloudStorage.java:547)
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1042)
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:383)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configureBuckets(GoogleHadoopFileSystemBase.java:1650)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.configureBuckets(GoogleHadoopFileSystem.java:71)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1598)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:509)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:207)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:168)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:291)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:92)
    at org.apache.hadoop.hbase.mapreduce.IdentityTableMapper.initJob(IdentityTableMapper.java:51)
    at org.apache.hadoop.hbase.mapreduce.Export.createSubmittableJob(Export.java:75)
    at org.apache.hadoop.hbase.mapreduce.Export.main(Export.java:187)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:153)
    at com.google.cloud.bigtable.mapreduce.Driver.main(Driver.java:35)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

Here's my env var setup in case it's helpful:

export HBASE_HOME=/home/hadoop/hbase-install
export HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
export HADOOP_HOME=/home/hadoop/hadoop-install

export HADOOP_CLIENT_OPTS="-Xbootclasspath/p:${HBASE_HOME}/lib/bigtable/alpn-boot-7.1.3.v20150130.jar"
export HADOOP_BIGTABLE_JAR=${HBASE_HOME}/lib/bigtable/bigtable-hbase-mapreduce-0.2.2-shaded.jar
export HADOOP_HBASE_JAR=${HBASE_HOME}/lib/hbase-server-1.1.2.jar
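
As a side note (not from the original post), the IllegalAccessError in sun.security.ssl above is the classic symptom of the alpn-boot jar not matching the exact JRE build it runs on, since each alpn-boot release is typically tied to specific JRE versions. A minimal, hedged sanity check of that setup could look like this:

# Hedged sanity check, not part of the original post.
java -version                                    # note the exact JRE update/build number
ls ${HBASE_HOME}/lib/bigtable/alpn-boot-*.jar    # the alpn-boot version must match that JRE build
echo ${HADOOP_CLIENT_OPTS}                       # confirm -Xbootclasspath/p: points at the jar above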

Also, when I try to run hbase shell and then list the tables, it just hangs and never returns the list of tables. This is what happens:

~$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-02-09 00:02:01,334 INFO  [main] grpc.BigtableSession: Opening connection for projectId mystical-height-89421, zoneId us-central1-b, clusterId twitter-data, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-09 00:02:01,358 INFO  [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-09 00:02:01,648 INFO  [bigtable-connection-shared-executor-pool1-t2] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015

hbase(main):001:0> list
TABLE
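
A hedged diagnostic (not from the original question): since the export stack trace above points at a TLS handshake failure, one way to check whether the shell hang is the same problem is to turn on JSSE debug logging for the shell's JVM, assuming the bin/hbase launcher passes HBASE_OPTS through to Java (the stock script does):

# Hypothetical diagnostic, not part of the original post.
export HBASE_OPTS="-Djavax.net.debug=ssl,handshake"
hbase shell
# If the handshake log stalls right after the ClientHello, suspect the same
# alpn-boot / JRE mismatch as in the export job.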

I've tried:

  • Double checking ALPN and ENV variables are appropriately set
  • Double checking hbase-site.xml and hbase-env.sh to make sure nothing looks wrong.

I even tried connecting to my cluster from ANOTHER gcloud instance (as I was previously able to do by following these directions), but I can't seem to get that to work now either... (it also hangs)

user@gcloud-instance:hbase-1.1.2$ bin/hbase shell
2016-02-09 00:07:03,506 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-02-09 00:07:03,913 INFO  [main] grpc.BigtableSession: Opening connection for projectId <project>, zoneId us-central1-b, clusterId <cluster>, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-09 00:07:04,039 INFO  [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-09 00:07:05,138 INFO  [Credentials-Refresh-0] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015

hbase(main):001:0> list
TABLE
Feb 09, 2016 12:07:08 AM com.google.bigtable.repackaged.io.grpc.internal.TransportSet$1 run
INFO: Created transport com.google.bigtable.repackaged.io.grpc.netty.NettyClientTransport@7b480442(bigtabletableadmin.googleapis.com/64.233.183.219:443) for bigtabletableadmin.googleapis.com/64.233.183.219:443

Any ideas about what I'm doing wrong? It looks like an access issue - how do I fix it?

Thanks!

Solution
  1. You can spin up a Dataproc cluster with Bigtable enabled by following these instructions.

  2. ssh to the master by ./cluster.sh ssh

  3. hbase shell to verify that all is in order.

  4. hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> gs://<bucket>/some-folder (a consolidated sketch of steps 2-6 follows this list)

  5. gsutil ls gs://<bucket>/some-folder/** and see if _SUCCESS exists. If so, the remaining files are your data.

  6. exit from your cluster master

  7. ./cluster.sh delete to get rid of the cluster, if you no longer require it.
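
Not part of the original answer, but as referenced in step 4, here is a consolidated sketch of steps 2-6 above. <table-name> and <bucket> are placeholders, and it assumes the cluster.sh helper and the HADOOP_BIGTABLE_JAR variable from the question are already in place:

# Hedged sketch of steps 2-6; adjust the placeholders to your table and bucket.
./cluster.sh ssh                                  # step 2: ssh to the master

# On the master:
hbase shell                                       # step 3: confirm the connection works, then quit
hadoop jar ${HADOOP_BIGTABLE_JAR} export-table \
    -libjars ${HADOOP_BIGTABLE_JAR} <table-name> gs://<bucket>/some-folder    # step 4
gsutil ls gs://<bucket>/some-folder/**            # step 5: check that _SUCCESS exists
exit                                              # step 6: leave the cluster master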

You ran into a problem with the weekly Java runtime update, which has since been corrected.

That concludes this article on the error when exporting data from Google Cloud Bigtable. We hope the answer above helps you solve the problem.
