Hadoop 2.6.0 HA master NameNode fails to start

qgelzfjb, posted on 2021-07-15 in Hadoop

[screenshot: error message]
[screenshot: port information]
Formatting ZK and the NameNode both succeeded.
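For reference, the formatting steps were the standard ones, roughly as sketched below (run on the nodes indicated; the details may differ slightly from what I actually typed):

    # on hadoop1 (nn1), with the JournalNodes already running:
    hdfs zkfc -formatZK        # initialize the HA failover state in ZooKeeper
    hdfs namenode -format      # format the first NameNode

    # on hadoop2 (nn2), instead of formatting again:
    hdfs namenode -bootstrapStandby   # copy over the freshly formatted metadata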
I checked the port, and no process is using it.
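(The check was along these lines; a minimal sketch, where any of these should show a listener or a leftover process if one existed:)

    netstat -tnlp | grep 50070    # or: ss -tnlp | grep 50070
    lsof -i :50070                # prints nothing when the port is free
    jps                           # look for a stale NameNode JVM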
I ran into this problem the first time I started the NameNode; even after deleting the tmp folder and reformatting several times, the log keeps saying the master NameNode cannot start: port in use 50070. Here is my hdfs-site.xml configuration:
hadoop1 (master):

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.nameservices</name>
    <value>gy-cluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.gy-cluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn1</name>
    <value>hadoop1:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn2</name>
    <value>hadoop2:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn1</name>
    <value>hadoop1:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn2</name>
    <value>hadoop2:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/gy-cluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.gy-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/hdfs/tmp/journal</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions.enable</name>
    <value>false</value>
</property>
<property>
    <name>dfs.ha.namenode.id</name>
    <value>nn1</value>
</property>

hadoop2 (master):

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.nameservices</name>
    <value>gy-cluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.gy-cluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn1</name>
    <value>hadoop1:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn2</name>
    <value>hadoop2:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn1</name>
    <value>hadoop1:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn2</name>
    <value>hadoop2:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/gy-cluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.gy-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/hdfs/tmp/journal</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions.enable</name>
    <value>false</value>
</property>
<property>
    <name>dfs.ha.namenode.id</name>
    <value>nn2</value>
</property>

hadoop3 (slave):

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.nameservices</name>
    <value>gy-cluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.gy-cluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn1</name>
    <value>hadoop1:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.gy-cluster.nn2</name>
    <value>hadoop2:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn1</name>
    <value>hadoop1:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.gy-cluster.nn2</name>
    <value>hadoop2:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/gy-cluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.gy-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/hdfs/tmp/journal</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions.enable</name>
    <value>false</value>
</property>

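For completeness, the addresses the NameNode actually resolves from this configuration can be double-checked with hdfs getconf, e.g. (a sketch):

    hdfs getconf -confKey dfs.namenode.http-address.gy-cluster.nn1   # expect hadoop1:50070
    hdfs getconf -confKey dfs.namenode.http-address.gy-cluster.nn2   # expect hadoop2:50070
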
If anyone knows what is going on here, please let me know. Thanks very much.
