Problem description
I am saving an RDD of Puts to HBase using saveAsNewAPIHadoopDataset. Below is my job creation and submission.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.broadcast.Broadcast
val outputTableName = "test3"
val conf2 = HBaseConfiguration.create()
conf2.set("hbase.zookeeper.quorum", "xx.xx.xx.xx")
conf2.set("hbase.mapred.outputtable", outputTableName)
conf2.set("mapreduce.outputformat.class", "org.apache.hadoop.hbase.mapreduce.TableOutputFormat")
val job = createJob(outputTableName, conf2)
val outputTable = sc.broadcast(outputTableName)
val hbasePuts = simpleRdd.map(k => convertToPut(k, outputTable))
hbasePuts.saveAsNewAPIHadoopDataset(job.getConfiguration)
Here is my job creation function:
def createJob(table: String, conf: Configuration): Job = {
  conf.set(TableOutputFormat.OUTPUT_TABLE, table)
  val job = Job.getInstance(conf, this.getClass.getName.split('$')(0))
  job.setOutputFormatClass(classOf[TableOutputFormat[String]])
  job
}
This function converts the data into HBase Put format:
def convertToPut(k: (String, String, String), outputTable: Broadcast[String]): (ImmutableBytesWritable, Put) = {
  val rowkey = k._1
  val put = new Put(Bytes.toBytes(rowkey))
  val one = Bytes.toBytes("cf1")
  val two = Bytes.toBytes("cf2")
  put.addColumn(one, Bytes.toBytes("a"), Bytes.toBytes(k._2))
  put.addColumn(two, Bytes.toBytes("a"), Bytes.toBytes(k._3))
  (new ImmutableBytesWritable(Bytes.toBytes(outputTable.value)), put)
}
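For reference, this means simpleRdd is an RDD[(String, String, String)] of (rowkey, cf1:a value, cf2:a value) tuples. A made-up sample, just to illustrate the expected shape:

// Hypothetical sample input: (rowkey, value for column cf1:a, value for column cf2:a).
val simpleRdd = sc.parallelize(Seq(
  ("row1", "foo", "bar"),
  ("row2", "baz", "qux")
))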
This is the error I am getting at line 125, which is: hbasePuts.saveAsNewAPIHadoopDataset(job.getConfiguration)
Exception in thread "main" java.lang.NullPointerException
    at org.apache.hadoop.hbase.security.UserProvider.instantiate(UserProvider.java:122)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:214)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.checkOutputSpecs(TableOutputFormat.java:177)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1099)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
    at ScalaSpark$.main(ScalaSpark.scala:125)
    at ScalaSpark.main(ScalaSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Recommended answer
I have encountered the same problem, and I think it is a bug in the org.apache.hadoop.hbase.mapreduce.TableOutputFormat class: checkOutputSpecs reads the configuration through getConf(), but when Spark runs the output-spec check it calls checkOutputSpecs on a freshly created instance whose conf field has never been set, so getConf() returns null and ConnectionFactory.createConnection(null) fails with the NullPointerException shown above.
The original TableOutputFormat code is below:
public void checkOutputSpecs(JobContext context) throws IOException,
    InterruptedException {
  try (Admin admin = ConnectionFactory.createConnection(getConf()).getAdmin()) {
    TableName tableName = TableName.valueOf(this.conf.get(OUTPUT_TABLE));
    if (!admin.tableExists(tableName)) {
      throw new TableNotFoundException("Can't write, table does not exist:" +
          tableName.getNameAsString());
    }
    if (!admin.isTableEnabled(tableName)) {
      throw new TableNotEnabledException("Can't write, table is not enabled: " +
          tableName.getNameAsString());
    }
  }
}
If I fix it as follows:
public void checkOutputSpecs(JobContext context) throws IOException,
    InterruptedException {
  // Set conf from the context parameter before it is used below.
  setConf(context.getConfiguration());
  try (Admin admin = ConnectionFactory.createConnection(getConf()).getAdmin()) {
    TableName tableName = TableName.valueOf(this.conf.get(OUTPUT_TABLE));
    if (!admin.tableExists(tableName)) {
      throw new TableNotFoundException("Can't write, table does not exist:" +
          tableName.getNameAsString());
    }
    if (!admin.isTableEnabled(tableName)) {
      throw new TableNotEnabledException("Can't write, table is not enabled: " +
          tableName.getNameAsString());
    }
  }
}
then my problem is solved.
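If patching and rebuilding HBase is not an option, the same idea can be applied from the application side. The following is a minimal sketch of that approach, not part of the original answer; the class name FixedTableOutputFormat is my own. It subclasses TableOutputFormat and sets the configuration before the stock check runs:

import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.mapreduce.JobContext

// Hypothetical workaround: populate conf from the JobContext before delegating
// to the stock checkOutputSpecs, mirroring the patch shown above.
class FixedTableOutputFormat[K] extends TableOutputFormat[K] {
  override def checkOutputSpecs(context: JobContext): Unit = {
    setConf(context.getConfiguration)
    super.checkOutputSpecs(context)
  }
}

It would then be registered in createJob with job.setOutputFormatClass(classOf[FixedTableOutputFormat[String]]) in place of the stock class.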
Another solution is to turn spark.hadoop.validateOutputSpecs off when creating the SparkSession. This skips Spark's output-spec validation entirely, so the failing check never runs; the trade-off is that a missing or disabled table will only surface when the actual write fails.
import org.apache.spark.sql.SparkSession

val session = SparkSession.builder()
  .config("spark.hadoop.validateOutputSpecs", false)
  .getOrCreate()
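Since the question's code builds a SparkContext directly rather than a SparkSession, a sketch of the equivalent, assuming the same setting is wanted there (the app name below is a placeholder), is to put the key on the SparkConf before the context is created:

import org.apache.spark.{SparkConf, SparkContext}

// Same effect when constructing a SparkContext directly, as the question's code does.
val sparkConf = new SparkConf()
  .setAppName("ScalaSpark") // placeholder app name
  .set("spark.hadoop.validateOutputSpecs", "false")
val sc = new SparkContext(sparkConf)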