Starting Hive fails with: Call From hadoop1/192.168.233.200 to hadoop1:9000 failed on connection exception

x33g5p2x · published 2020-11-06 in Hive

Yesterday morning I started the cluster and found that the master somehow failed to boot. After half a day of fruitless repairs, I had no choice but to "graft" the master from a previously built cluster into this one (dynamic node addition/removal).
After powering off with shutdown now yesterday, Hive would not start again this morning.

The key part of the error output:

Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From hadoop1/192.168.233.200 to hadoop1:9000 failed on connection exception: java.net.ConnectException: Connection refused
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:610)
	at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:553)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:750)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Call From hadoop1/192.168.233.200 to hadoop1:9000 failed on connection exception: java.net.ConnectException: Connection refused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1479)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.j
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:708)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:654)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586)
	... 9 more
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
	at org.apache.hadoop.ipc.Client.call(Client.java:1451)
	... 29 more

A quick Baidu search turned up many possible causes: safe mode still on, the Hadoop cluster not started, the firewall or SELinux not disabled, and so on.
None of those applied here; safe mode was off, and running jps on the master showed no NameNode process at all.

Shutting Hadoop down also printed "no namenode to stop".

That settles it: the missing NameNode is the culprit behind the error. The next task is to fix Hadoop starting without a NameNode.
(With no NameNode running, visiting master:50070 in a browser fails as well.)
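A quick way to confirm this diagnosis is to probe the NameNode RPC port directly: if nothing is listening, the connection-refused error above is exactly what clients will see. This is a generic sketch (hadoop1 and port 9000 come from the error message; substitute the host/port from your fs.defaultFS):

```shell
# Probe whether anything is listening on the NameNode RPC port.
check_port() {
  # Succeeds (exit 0) only if a TCP connection to host $1, port $2 can be opened.
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

if check_port hadoop1 9000; then
  echo "NameNode port is open"
else
  echo "Connection refused - NameNode is probably not running"
fi
```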

The fix:
Go to the directory that hadoop.tmp.dir points to in Hadoop's core-site.xml:

cd $HADOOP_HOME/etc/hadoop
ls
vim core-site.xml
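For reference, the two properties involved typically look like this in core-site.xml. The hostname and path below mirror this post's setup and are assumptions; yours may differ:

```xml
<configuration>
  <!-- Where clients (including Hive) reach the NameNode; this is the hadoop1:9000 in the error. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <!-- Base directory for NameNode/DataNode state. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-2.7.3/hdfs/tmp</value>
  </property>
</configuration>
```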

Find the directory configured there (shown in the original screenshot), delete it, and recreate it:

cd /usr/hadoop/hadoop-2.7.3
rm -rf hdfs
mkdir -p hdfs/tmp
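If you would rather not rm -rf the directory outright, a slightly safer variant is to move it aside with a timestamped backup first, in case the old fsimage/edit logs turn out to be needed. reset_tmp_dir is a hypothetical helper, and the path in the usage comment follows this post's layout:

```shell
# Hypothetical helper: replace a Hadoop tmp dir, keeping a timestamped
# backup of the old contents instead of deleting them.
reset_tmp_dir() {
  local dir="$1"
  [ -d "$dir" ] && mv "$dir" "${dir}.bak.$(date +%s)"
  mkdir -p "$dir"
}

# Usage with this post's path (adjust to your hadoop.tmp.dir):
# reset_tmp_dir /usr/hadoop/hadoop-2.7.3/hdfs/tmp
```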

Next, format the NameNode (note that this wipes any existing HDFS metadata, so only do it if the data is expendable):

hadoop namenode -format
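One side effect to watch for: reformatting generates a new clusterID, and any DataNode whose VERSION file still carries the old ID will refuse to register with the new NameNode. You can compare the two IDs with a small helper like this (cluster_id is a hypothetical name; the VERSION files live under hadoop.tmp.dir, in dfs/name/current and dfs/data/current):

```shell
# Pull the clusterID out of a Hadoop VERSION file (simple key=value format).
cluster_id() {
  sed -n 's/^clusterID=//p' "$1"
}

# Compare NameNode vs. DataNode IDs (paths follow this post's hadoop.tmp.dir):
# cluster_id /usr/hadoop/hadoop-2.7.3/hdfs/tmp/dfs/name/current/VERSION
# cluster_id /usr/hadoop/hadoop-2.7.3/hdfs/tmp/dfs/data/current/VERSION
```

If the IDs differ, copying the NameNode's clusterID into the DataNode's VERSION file (or clearing the DataNode's data dir, accepting the data loss) lets the DataNode start again.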

Then restart the cluster by running start-all.sh again.
NameNode now shows up in jps, and Hive starts normally.
