Bigtable import error

Asked by thigvfpy on 2021-06-09 in HBase

I generated a sequence file with Hive and tried to import it into Bigtable, but the import job fails with the error below.

2015-06-21 00:05:42,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1434843251631_0007_m_000000_1: Error: java.lang.ClassCastException: org.apache.hadoop.io.BytesWritable cannot be cast to org.apache.hadoop.hbase.io.ImmutableBytesWritable
    at com.google.cloud.bigtable.mapreduce.Import$Importer.map(Import.java:127)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
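
The exception points at the key type inside the sequence file: the Bigtable Import mapper casts every record key to ImmutableBytesWritable, while Hive writes plain BytesWritable keys (Import expects files in the HBase Export format, which Hive does not produce). A minimal sketch, assuming the Hadoop 2.x client libraries on the classpath, that prints the key and value classes recorded in one of the generated files so the mismatch can be confirmed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class InspectSeqFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pass one of the part files Hive wrote under the table LOCATION.
        Path file = new Path(args[0]);
        try (SequenceFile.Reader reader = new SequenceFile.Reader(
                conf, SequenceFile.Reader.file(file))) {
            // For a Hive-generated file the key class is BytesWritable,
            // not the ImmutableBytesWritable that the Import mapper casts to.
            System.out.println("key class:   " + reader.getKeyClassName());
            System.out.println("value class: " + reader.getValueClassName());
        }
    }
}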

I used the following Hive table definition and settings to generate the sequence file.

create table ilv_bigtable_test(
item_id int
)
stored as sequencefile
LOCATION 'gs://zxx/xx/aa1_temp/ilv_bigtable_test/'
TBLPROPERTIES('serialization.null.format'='')
;

SET hive.exec.compress.output=true;
SET mapred.max.split.size=256000000;
SET mapred.output.compression.type=BLOCK;

insert overwrite table ilv_bigtable_test
select 
item_id
FROM xxx
;

And here is the HBase shell statement that created the target table:

create 'test', 'item_id'
Answer 1 (wn9m85ua):

You need a newer version of Apache Hive: 1.1.0 is the first release that supports HBase 1.0, which Cloud Bigtable requires. Try at least Hive 1.2.1.
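
The version requirement exists because Cloud Bigtable is reached through the HBase 1.0 client API, and Hive 1.1.0 is where its HBase integration first targets that API. As a sanity check that the Bigtable side works independently of Hive, here is a minimal sketch, assuming the bigtable-hbase-1.x client on the classpath; "my-project" and "my-instance" are placeholder ids, and 'test'/'item_id' match the table created above:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class BigtableSmokeTest {
    public static void main(String[] args) throws Exception {
        // BigtableConfiguration.connect returns a standard HBase-1.0-style
        // Connection backed by Bigtable, which is why Hive needs HBase 1.0
        // support before its HBase integration can talk to Bigtable.
        try (Connection conn = BigtableConfiguration.connect("my-project", "my-instance");
             Table table = conn.getTable(TableName.valueOf("test"))) {
            Put put = new Put(Bytes.toBytes("row-1"));
            // 'item_id' is the column family from the create statement above.
            put.addColumn(Bytes.toBytes("item_id"), Bytes.toBytes("item_id"),
                          Bytes.toBytes("42"));
            table.put(put);
        }
    }
}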
