HBase import error

gk7wooem  posted 2021-06-09 in HBase
Follow (0) | Answers (1) | Views (326)

I am trying to import a table that was exported from another HBase instance running 0.98.4. I exported it as follows:

hbase org.apache.hadoop.hbase.mapreduce.Driver export 'tblname' /path/

I am now trying to import this table, which has already been placed in HDFS using hadoop fs -put. When I run the import command below, it gives an error:

hbase org.apache.hadoop.hbase.mapreduce.Driver import 'tblname' /hdfs/path

2015-06-24 02:19:24,492 ERROR [main] security.UserGroupInformation: PriviledgedActionException as:deeshank (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/home/deeshank/DB/hbase_home/lib/hadoop-mapreduce-client-core-2.2.0.jar
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/home/deeshank/DB/hbase_home/lib/hadoop-mapreduce-client-core-2.2.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.hadoop.hbase.mapreduce.Import.main(Import.java:535)

I am not sure what is causing this problem. I am running Hadoop 2.6.0.


z5btuh9x1#

hdfs://localhost:54310/ is your Hadoop HDFS address. The job is looking for that jar on HDFS rather than on the local file system, so you can either change that property in your application's configuration or upload the jar to HDFS.
To list what is actually on HDFS (as opposed to the local Linux file system), you can use:
"hdfs dfs -ls hdfs://localhost:9000/"
where hdfs://localhost:9000/ is the address of the Hadoop HDFS file system.
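The property being referred to is fs.defaultFS (called fs.default.name in older Hadoop releases), which tells clients where the HDFS NameNode lives. A minimal sketch of how it might look in core-site.xml, assuming the NameNode runs on localhost:54310 as in the error message:

```xml
<!-- core-site.xml: points Hadoop/HBase clients at the HDFS NameNode.
     The host and port below are assumptions taken from the error message;
     substitute your cluster's actual values. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```

Alternatively, if the address is correct and only the jar is missing, you could copy it to the HDFS path the job expects, e.g. with `hdfs dfs -mkdir -p <dir>` followed by `hdfs dfs -put <local-jar> <dir>`, using the exact path shown in the FileNotFoundException.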
