HBase fails to start after adding a jar file for a mapper/reducer

35g0bw71 · published 2021-05-29 in Hadoop

I am trying to write a mapper/reducer for HBase and added the jar. But after adding the jar file to the lib directory, I can no longer start HBase. How can I debug what went wrong, and how do I change the log level? Any help would be appreciated. The exception is below:
```
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
	at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:217)
	at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:153)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:224)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2290)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.ipc.RPC.getProtocolProxy(Ljava/lang/Class;JLjava/net/InetSocketAddress;Lorg/apache/hadoop/security/UserGroupInformation;Lorg/apache/hadoop/conf/Configuration;Ljavax/net/SocketFactory;ILorg/apache/hadoop/io/retry/RetryPolicy;Ljava/util/concurrent/atomic/AtomicBoolean;)Lorg/apache/hadoop/ipc/ProtocolProxy;
	at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:420)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:316)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
	at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1004)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:562)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:364)
	at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:307)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
	... 7 more
```
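[Editor's note] On the log-level part of the question: HBase versions of this era (1.x) are configured through log4j 1.x properties in `conf/log4j.properties`. A hedged sketch of raising the relevant loggers to DEBUG (the logger names below are the standard Hadoop/HBase package roots; adjust to your install):

```properties
# In $HBASE_HOME/conf/log4j.properties (log4j 1.x syntax).
# Raise these to DEBUG to see detailed startup and classpath activity.
log4j.logger.org.apache.hadoop.hbase=DEBUG
log4j.logger.org.apache.hadoop=DEBUG
```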


Answer 1, by ehxuflar:

So the error appears to have been caused by a mismatch between the Hadoop libraries in HBase's lib directory (hadoop-*-2.5.1) and the Hadoop version I actually had installed (hadoop-*-2.6.0). My jar was looking for classes that don't exist in the older Hadoop libraries, and that is why it failed. This answer made me realize the problem. After I copied all the hadoop-*-2.6.0 jars into the lib directory, HBase started as expected. This is also mentioned in the HBase/Hadoop compatibility documentation.
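The fix described above can be sketched as a shell session. This is a minimal, self-contained illustration (it uses temporary directories to stand in for a real install, and the jar names and `HBASE_HOME`/`HADOOP_HOME` paths are assumptions, not from the post): remove HBase's bundled hadoop-*-2.5.1 jars and copy in the hadoop-*-2.6.0 jars from the installed Hadoop.

```shell
set -e

# Stand-in directories for illustration; on a real system these would be
# the actual HBase and Hadoop installation roots.
HBASE_HOME=$(mktemp -d)
HADOOP_HOME=$(mktemp -d)
mkdir -p "$HBASE_HOME/lib" "$HADOOP_HOME/share/hadoop/common"

# Simulate the mismatched layout: HBase ships a 2.5.1 jar while the
# installed Hadoop is 2.6.0.
touch "$HBASE_HOME/lib/hadoop-common-2.5.1.jar"
touch "$HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar"

# Drop the stale 2.5.1 jars from HBase's lib directory...
rm "$HBASE_HOME"/lib/hadoop-*-2.5.1.jar

# ...and copy in the jars matching the installed Hadoop version.
cp "$HADOOP_HOME"/share/hadoop/common/hadoop-*-2.6.0.jar "$HBASE_HOME/lib/"

ls "$HBASE_HOME/lib"
```

On a real install there are several jar families to sync (common, hdfs, mapreduce, yarn, auth); the safest check is to diff the hadoop-* jar versions in `$HBASE_HOME/lib` against your Hadoop installation before restarting HBase.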
