Error connecting to local HBase

qybjjes1 posted on 2021-06-03 in Hadoop

I am trying to connect to the HBase installed on my local system (Hortonworks 1.1.1.16) from a small Java program that executes the following call:

HBaseAdmin.checkHBaseAvailable(conf);
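
For context, the call above sits in a trivial test class along these lines (a sketch rather than the exact code; the ZooKeeper quorum and ports are assumed to come from the hbase-site.xml on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TestHBase {
    public static void main(String[] args) throws Exception {
        // Loads hbase-default.xml / hbase-site.xml from the classpath
        Configuration conf = HBaseConfiguration.create();
        // Throws MasterNotRunningException (or similar) if the cluster cannot be reached
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is available");
    }
}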

It is worth mentioning that connecting through the hbase shell works without problems.
The contents of the hosts file are as follows (where example.com stands for the actual hostname):

127.0.0.1 localhost example.com

HBase is configured to run in standalone mode:

hbase.cluster.distributed=false

When the program is executed, the following exception is thrown:

13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_19
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.19.x86_64/jre
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.class.path=[...]
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.2.1.el6.x86_64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.name=root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root/git/project
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=example.com:2181 sessionTimeout=60000 watcher=hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13e9d6851af0046, negotiated timeout = 40000
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: ClusterId is cccadf06-f6bf-492e-8a39-e8beac521ce6
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 of 1 failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
    at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
    at TestHBase.main(TestHBase.java:37)
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
    ... 8 more
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x13e9d6851af0046
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Session: 0x13e9d6851af0046 closed
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: EventThread shut down
org.apache.hadoop.hbase.exceptions.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:793)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
    at TestHBase.main(TestHBase.java:37)
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
    at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
    ... 5 more
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
    ... 8 more

This trace gives some hints about what is actually happening: the connection to ZooKeeper is established correctly, but the failure occurs when the client tries to reach the master.
Although I have spent hours trying to find a solution on Google, my case seems to be unusual. In particular, this exception differs from most of those found online in two ways:
Everyone else seems to get the error getMaster attempt 0 of 1 failed rather than getMaster attempt 1 of 1 failed. I do not know whether that is significant, but it strikes me as a bit odd.
I could not find anyone else reporting the Broken pipe error.
By the way, the Hortonworks management console shows that the master is in fact running.
Looking at the latest logs, the output is as follows:

2013-05-13 15:30:07,192 WARN org.apache.hadoop.ipc.HBaseServer: Incorrect header or version mismatch from 127.0.0.1:40788 got version 0 expected version 3

Since this is a warning rather than an error, I do not know whether it is related to the actual problem. The port is different on each execution.
Thank you very much for your time; I hope you can shed some light on this issue.
Regards


bihw5rsg1#

We finally found the problem and fixed it. It turned out to be a dependency issue: we were using hbase-0.95.0 together with hbase-client-0.95.0. Using hbase-0.94.7 or hbase-0.94.9 instead seems to work.
However, even with that version of the HBase libraries, some problems still showed up in certain scenarios. In particular, there were issues when running inside an application server (JBoss AS 7). In the end, all of the problems were solved by removing the dependency on hbase-client-0.95.0 and replacing it with hadoop-core-1.1.2, because some required classes are not included in the HBase libraries.
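Incidentally, a mismatched client would also be consistent with the got version 0 expected version 3 warning in the server log, since the 0.95 client and a 0.94-based server do not speak the same RPC wire format. For reference, a sketch of what the working dependency set might look like in a Maven pom.xml; the coordinates below are my best guess for that era (the 0.94 line shipped a single hbase artifact) and should be checked against your own repository:

<!-- Assumed coordinates for the combination described above -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.9</version> <!-- 0.94.7 was also reported to work -->
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.1.2</version> <!-- replaces hbase-client-0.95.0 -->
</dependency>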
Regards.


vhipe2zx2#

I would suggest first checking whether your HBase master/region server ports are actually bound, using netstat -n -a. I have run into a situation where the HBase master IPC was bound only to the external IP (that was Cloudera CDH) and could not be reached through 127.0.0.1. That looks like the most likely scenario here, since the hbase shell can still work in that case.
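As a rough illustration (assuming the default 0.94 master RPC port of 60000; adjust if hbase.master.port has been changed), something like this shows which local address the master is actually bound to:

# Look at the local-address column for the master port
netstat -n -a | grep 60000

If the only listening entry is the external IP (rather than 0.0.0.0 or 127.0.0.1), a client going through localhost will not be able to reach the master.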
Another possible cause is a previous cluster crash with some HDFS data corruption. In that case HBase does not actually start but sits waiting for HDFS to leave safe mode. That does not look like your case, but if it were, you could manually force HDFS out of safe mode from the console and then run fsck for Hadoop, plus the analogous procedure for HBase.
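On a Hadoop 1.x / HBase 0.94 setup those recovery steps would look roughly like this (hbase hbck is my assumption for the analogous HBase-side check):

# Force the NameNode out of safe mode (only if it is stuck there after a crash)
hadoop dfsadmin -safemode leave
# Check HDFS for corrupt or missing blocks
hadoop fsck /
# Check (and optionally repair) HBase metadata consistency
hbase hbck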
