Cannot get the datanode running in a multi-node Hadoop cluster setup, need advice

vql8enpb · posted 2021-06-02 · in Hadoop

I am trying to set up a multi-node Hadoop cluster, but the datanode fails to start and I need help. The details are below; apart from this there is no other configuration. So far I have only one datanode and one namenode set up.

NAMENODE setup:
core-site.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

hdfs-site.xml

<property>
  <name>dfs.name.dir</name>
  <value>/data/namenode</value>
 </property>

DATANODE setup:

core-site.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

hdfs-site.xml

<property>
  <name>dfs.data.dir</name>
  <value>/data/datanode</value>
 </property>

When I run the namenode it runs fine; however, when I try to run the datanode on the other machine, whose IP is 192.168.1.8, it fails and the log says:

2017-05-13 21:26:27,744 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-05-13 21:26:27,862 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-05-13 21:26:32,908 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:34,979 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:36,041 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:37,093 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:38,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:39,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

and the datanode dies.
Is there anything else that needs to be set up?
Let me know if any other details are required. Are there any other files that need changing? I am using CentOS 7 for the environment. I have also formatted the namenode more than 2-3 times, and the permissions are correct. It looks like purely a connectivity issue, yet when I scp from master to slave (namenode to datanode) it works fine.
Please suggest any other setup needed to make this work!

pxq42qpu · answer 1

There is a typo in the configured property name; it is missing an "a": fs.defult.name (vs. fs.default.name).
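
A minimal sketch of the corrected core-site.xml entry, assuming the same namenode address as in the question. The fix has to be applied on both the namenode and the datanode, since both copies shown above carry the same typo:

<property>
  <!-- correct key: "default", not "defult" -->
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.7:9000</value>
</property>

Note that fs.default.name is the old Hadoop 1.x key; on Hadoop 2.x and later it is deprecated in favor of fs.defaultFS, which takes the same hdfs://host:port value. After correcting the property, restart the namenode and then the datanode.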
