Hadoop 3: worker node error connecting to the ResourceManager

huus2vyu · posted 2021-05-29 in Hadoop

I have a 3-node Hadoop cluster (DigitalOcean):
hadoop-master is configured as a NameNode and DataNode
hadoop-worker1 and hadoop-worker2 are configured as DataNodes
Whenever I run a MapReduce streaming job and a worker node is picked to run the ApplicationMaster, the job hangs while trying to connect to the ResourceManager. The DataNode logs show it attempting to connect to 0.0.0.0:

INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s);
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s);

That is the default value of the yarn.resourcemanager.hostname property. However, I have already specified this property in yarn-site.xml on both of my worker nodes:

<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>hadoop-master</value>
</property>
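Port 8030 in the log is the ResourceManager scheduler address, which YARN derives from yarn.resourcemanager.hostname by default. As a minimal sketch (assuming the default port 8030), that address could also be pinned explicitly in yarn-site.xml:

<property>
   <!-- Address the ApplicationMaster uses to reach the RM scheduler;
        defaults to ${yarn.resourcemanager.hostname}:8030 -->
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>hadoop-master:8030</value>
</property>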

On all of my nodes, the /etc/hosts file looks like this, so hadoop-master should resolve to the correct IP address:


# 127.0.1.1 hadoop-worker1 hadoop-worker1
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
# ::1 ip6-localhost ip6-loopback
# fe00::0 ip6-localnet
# ff00::0 ip6-mcastprefix
# ff02::1 ip6-allnodes
# ff02::2 ip6-allrouters
# ff02::3 ip6-allhosts

165.22.19.161 hadoop-master
165.22.19.154 hadoop-worker1
165.22.19.158 hadoop-worker2

I also checked the configuration by going to hadoop-worker1:9864 and looking at what is loaded in the worker node's web interface:

<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
<final>false</final>
<source>yarn-site.xml</source>
</property>

I also tried running a yarn command from one of the worker nodes, and it actually contacts the ResourceManager correctly:

hadoop@hadoop-worker1:/opt/hadoop$ yarn node --list
2019-06-15 18:47:42,119 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/165.22.19.161:8032
Total Nodes:3
         Node-Id             Node-State Node-Http-Address       Number-of-Running-Containers
hadoop-worker2:40673            RUNNING hadoop-worker2:8042                                0
hadoop-worker1:41875            RUNNING hadoop-worker1:8042                                1
hadoop-master:40075             RUNNING hadoop-master:8042                                 0
hadoop@hadoop-worker1:/opt/hadoop$

I am not sure what else to try. I believe it may be related to the streaming job not picking up the settings correctly. Any help would be appreciated, as I have been stuck on this for two days.

56lgkhnf1#

I added the -D yarn.resourcemanager.hostname=hadoop-master flag to the mapred streaming command, and it now seems to work.
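A minimal sketch of what that invocation could look like (the input/output paths, mapper, and reducer below are placeholders, not from the original job; the generic -D option must come before the streaming options):

mapred streaming \
  -D yarn.resourcemanager.hostname=hadoop-master \
  -input /user/hadoop/input \
  -output /user/hadoop/output \
  -mapper /bin/cat \
  -reducer /usr/bin/wc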
