hadoop: Connecting to ResourceManager failed

vdzxcuhz, posted 2021-06-04 in Hadoop

After installing Hadoop 2.2 and trying to launch the pipes example, I get the following error (the same error also appears when trying to launch hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out):

/usr/local/hadoop$ hadoop pipes -Dhadoop.pipes.java.recordreader=true -Dhadoop.pipes.java.recordwriter=true -input someFile.txt -output /out -program bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

my yarn-site.xml:

<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>

core-site.xml:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

mapred-site.xml:

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
</configuration>

hdfs-site.xml:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>

I found that my IPv6 is disabled. Could my /etc/hosts be incorrect?
/etc/hosts:

fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1 localhost.localdomain localhost hduser

# Auto-generated hostname. Please do not remove this comment.

79.98.30.76 356114.s.dedikuoti.lt  356114
::1             localhost ip6-localhost ip6-loopback
g2ieeal7

g2ieeal71#

In my case there was a typo in my XML config file. You can check the log at $HADOOP_HOME/logs/yarn-<user>-resourcemanager-<yourhostname>.log; there might be a useful stack trace there.

ao218c7q

ao218c7q2#

Configuration conf = HBaseConfiguration.create();
conf.set("yarn.resourcemanager.address", "127.0.0.1:8032");

You can set yarn.resourcemanager.address in the conf like this.
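
For completeness, here is a minimal sketch of verifying that connection with the YarnClient API (the class name RmConnectionCheck is made up for illustration, the address is only an example, and the hadoop-yarn-client jars are assumed to be on the classpath):

import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmConnectionCheck {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();
        // Point the client at the ResourceManager explicitly, as in the snippet above.
        conf.set("yarn.resourcemanager.address", "127.0.0.1:8032");

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
            // Any RPC will do; fetching cluster metrics proves the RM answered.
            YarnClusterMetrics metrics = yarnClient.getYarnClusterMetrics();
            System.out.println("Connected, NodeManagers: " + metrics.getNumNodeManagers());
        } finally {
            yarnClient.stop();
        }
    }
}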

3duebb1j

3duebb1j3#

I fixed it by changing the values in yarn-site.xml to 127.0.0.1:*:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8031</value>
</property>
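
A quick way to see which addresses a client actually picks up is a sketch like the following (class name PrintRmAddresses is illustrative; it assumes your yarn-site.xml and the YARN client jars are on the classpath):

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PrintRmAddresses {
    public static void main(String[] args) {
        // YarnConfiguration loads yarn-default.xml and yarn-site.xml from the classpath.
        // If yarn-site.xml is missing or does not set these keys, they fall back to the
        // 0.0.0.0 defaults, which is exactly what the retry log above is connecting to.
        YarnConfiguration conf = new YarnConfiguration();
        String[] keys = {
            "yarn.resourcemanager.address",
            "yarn.resourcemanager.scheduler.address",
            "yarn.resourcemanager.resource-tracker.address"
        };
        for (String key : keys) {
            System.out.println(key + " = " + conf.get(key));
        }
    }
}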

jhkqcmku

jhkqcmku4#

I faced the same problem and solved it.
Since the connection to the ResourceManager is failing, make sure YARN is actually running. YARN is split into different entities; one of them is the ResourceManager, which is responsible for allocating resources to the various applications running in the cluster.
Perform the following steps:
Start YARN with the command start-yarn.sh
Check the ResourceManager node with the command jps
Add the following to your configuration:

<property>
    <name>yarn.resourcemanager.address</name>
    <value>127.0.0.1:8032</value>
</property>
7uhlpewt

7uhlpewt5#

Make sure you have started YARN. Start it with this command:
start-yarn.sh
Then verify that the ResourceManager is running with this command:
jps
The output should look something like this:
17542 NameNode
17920 SecondaryNameNode
22064 Jps
17703 DataNode
18226 ResourceManager
18363 NodeManager
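
If ResourceManager shows up in jps but jobs still cannot connect, a plain TCP probe of the RM client port can rule out an address/port mismatch. This is only a sketch (class name RmPortProbe is made up; 127.0.0.1:8032 is assumed from the configs above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class RmPortProbe {
    public static void main(String[] args) {
        // Try to open a TCP connection to the ResourceManager client RPC port.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("127.0.0.1", 8032), 2000);
            System.out.println("ResourceManager port 8032 is reachable");
        } catch (Exception e) {
            System.out.println("Cannot reach 127.0.0.1:8032: " + e.getMessage());
        }
    }
}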

n9vozmp4

n9vozmp46#

This error occurs because the ResourceManager failed to start. If you have already changed the config files as others suggested and still get the error, refer to this.
Note: verified on Windows 10, Hadoop 3.1.3.
If you are a Windows user, go to hadoop-3.1.3/sbin/, run stop-all.cmd and then start-all.cmd.
Several terminals open now: NodeManager, DataNode, NameNode and ResourceManager. Look at the error message in the ResourceManager terminal; that error should be your problem.
If the error message is similar to this:
NoClassDefFoundError: org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager
copy the file
hadoop-yarn-server-timelineservice-3.1.3.jar
from ~/hadoop-3.1.3/share/hadoop/yarn/timelineservice
to ~/hadoop-3.1.3/share/hadoop/yarn
This should fix your problem.
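
To confirm the jar copy worked, a small check like this sketch (class name TimelineClassCheck is made up; the class it looks for is taken from the error message above) can verify the class is now loadable:

public class TimelineClassCheck {
    public static void main(String[] args) {
        // The class the ResourceManager failed to load before the jar was copied.
        String cls = "org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager";
        try {
            Class.forName(cls);
            System.out.println("Found " + cls);
        } catch (ClassNotFoundException e) {
            System.out.println("Still missing " + cls
                + " - check that hadoop-yarn-server-timelineservice-3.1.3.jar was copied");
        }
    }
}

Compile and run it with the jars printed by the hadoop classpath command on the classpath.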

4ngedf3f

4ngedf3f7#

This issue may be due to a missing HADOOP_CONF_DIR, which MapReduce applications need in order to connect to the ResourceManager specified in yarn-site.xml. So before running a MapReduce job, try setting/exporting HADOOP_CONF_DIR manually to the proper Hadoop conf directory, e.g. export HADOOP_CONF_DIR=/etc/hadoop/conf
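
A sketch of what this looks like from the client side (class name LoadConfDir is illustrative and /etc/hadoop/conf is only an example path): with HADOOP_CONF_DIR exported, the yarn-site.xml in it can be loaded explicitly, so the client does not fall back to the 0.0.0.0:8032 default:

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class LoadConfDir {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // e.g. export HADOOP_CONF_DIR=/etc/hadoop/conf before running the job
        String confDir = System.getenv("HADOOP_CONF_DIR");
        if (confDir != null) {
            conf.addResource(new Path(new File(confDir, "yarn-site.xml").toURI()));
        }
        System.out.println("yarn.resourcemanager.address = "
            + conf.get("yarn.resourcemanager.address", "<unset, default 0.0.0.0:8032>"));
    }
}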

dy1byipe

dy1byipe8#

The problem connecting to the ResourceManager was that I needed to add a few properties to yarn-site.xml:

<property>
<name>yarn.resourcemanager.address</name>
<value>127.0.0.1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>127.0.0.1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>127.0.0.1:8031</value>
</property>

My job still did not run after this, but at least the connection was now successful.

9cbw7uwe

9cbw7uwe9#

The proper way is probably to add the following lines to yarn-site.xml:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>127.0.0.1</value>
</property>

because the host value is a single hostname that can be set in place of setting all the yarn.resourcemanager*address resources, which then use the default ports for the ResourceManager components.
From the Apache Hadoop 2.7.1 ResourceManager configuration:
Parameter: yarn.resourcemanager.hostname
Value: ResourceManager host.
Notes: A single hostname that can be set in place of setting all yarn.resourcemanager*address resources. Results in default ports for ResourceManager components.
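
A minimal sketch of why the single property is enough (class name HostnameExpansion is made up; it assumes no explicit *.address overrides in your yarn-site.xml): yarn-default.xml defines the individual addresses as ${yarn.resourcemanager.hostname} plus a default port, so Configuration's variable expansion fills in the host:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class HostnameExpansion {
    public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        conf.set("yarn.resourcemanager.hostname", "127.0.0.1");
        // yarn-default.xml declares yarn.resourcemanager.address as
        // ${yarn.resourcemanager.hostname}:8032, so this should print 127.0.0.1:8032
        // (and 127.0.0.1:8030 below) unless yarn-site.xml overrides the addresses.
        System.out.println(conf.get("yarn.resourcemanager.address"));
        System.out.println(conf.get("yarn.resourcemanager.scheduler.address"));
    }
}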

wfveoks0

wfveoks010#

Use the following settings in /etc/hosts, adding your hostname in place of your_host_name:

127.0.0.1   localhost
127.0.0.1   your_host_name
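
To double-check that the entry is picked up, a small resolution test like this sketch (class name HostnameResolutionCheck is made up) resolves the local hostname through the normal Java/OS resolver, which consults /etc/hosts:

import java.net.InetAddress;

public class HostnameResolutionCheck {
    public static void main(String[] args) throws Exception {
        // Resolve the machine's hostname; with the /etc/hosts entry above this
        // should print a loopback or LAN address rather than fail.
        String hostname = InetAddress.getLocalHost().getHostName();
        InetAddress resolved = InetAddress.getByName(hostname);
        System.out.println(hostname + " -> " + resolved.getHostAddress());
    }
}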
