Hadoop installation: Secondary NameNode, NodeManagers and ResourceManager fail to start

mrfwxfqh · published 2021-07-13 in Hadoop

I have installed a Hadoop 3.1.0 cluster on four Linux machines: hadoop1 (master), hadoop2, hadoop3 and hadoop4.
I ran start-dfs.sh and start-yarn.sh, but jps shows only the NameNode and DataNodes running; the Secondary NameNode, NodeManagers and ResourceManager fail to start. I tried a few fixes, and below is what I got. How do I configure and start the Secondary NameNode, NodeManagers and ResourceManager?
The Secondary NameNode log:

java.net.BindException: Port in use: hadoop1:9000
...
Caused by: java.net.BindException: Address already in use
...
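A bind error like the one above usually means another daemon already owns the port. A quick way to confirm who holds it, assuming the iproute2 `ss` tool is installed (`lsof -i :9000` works too):

```shell
# Show the process listening on port 9000 (likely the NameNode RPC
# server, if fs.defaultFS also points at hadoop1:9000); prints a
# fallback message when nothing is bound there.
ss -tlnp | grep ':9000' || echo 'port 9000 is free'
```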

The NodeManager and ResourceManager logs:

2021-02-21 03:29:03,463 WARN org.eclipse.jetty.webapp.WebAppContext: Failed startup of context o.e.j.w.WebAppContext@51d719bc{/,file:///tmp/jetty-0.0.0.0-8042-node-_-any-8548809575065892553.dir/webapp/,UNAVAILABLE}{/node}
com.google.inject.ProvisionException: Unable to provision, see the following errors:

1) Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
  at org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:52)
  at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer$NMWebApp.setup(WebServer.java:153)
  while locating org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver

My hdfs-site.xml:

    <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>hadoop1:9000</value>
    </property>
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/app/hadoop/hadoop-3.1.0/name</value>
    </property>
    <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/app/hadoop/hadoop-3.1.0/data</value>
    </property>
    <property>
            <name>dfs.replication</name>
            <value>3</value>
    </property>
    <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
    </property>

My yarn-site.xml:

    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
    <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
            <name>yarn.resourcemanager.address</name>
            <value>hadoop1:8032</value>
    </property>
    <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>hadoop1:8030</value>
    </property>
    <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>hadoop1:8031</value>
    </property>
    <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>hadoop1:8033</value>
    </property>
    <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>hadoop1:8088</value>
    </property>
    <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>1024</value>
    </property>
    <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>1</value>
    </property>

The workers file:

hadoop1
hadoop2 
hadoop3   
hadoop4

/etc/hosts:

192.168.0.111 hadoop1
192.168.0.112 hadoop2
192.168.0.113 hadoop3
192.168.0.114 hadoop4

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
o4tp2gmn · answer 1

I had installed JDK 15.0.2, which causes problems with Hadoop 3.1.0: the javax.activation classes that the YARN web apps need were removed from the JDK in Java 11, hence the NoClassDefFoundError. I installed JDK 8 and changed JAVA_HOME to point to it. Everything ran fine!
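A minimal sketch of that JAVA_HOME change. The JDK path below is an example (Debian/Ubuntu layout), and HADOOP_ENV is a stand-in; on a real cluster it should point at $HADOOP_HOME/etc/hadoop/hadoop-env.sh:

```shell
# HADOOP_ENV is a stand-in for $HADOOP_HOME/etc/hadoop/hadoop-env.sh;
# the JDK path is an example and must match your actual JDK 8 install.
HADOOP_ENV=${HADOOP_ENV:-/tmp/hadoop-env.sh}
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> "$HADOOP_ENV"
# Confirm the setting landed in the file
grep 'JAVA_HOME' "$HADOOP_ENV"
```

Remember to restart the daemons (stop-yarn.sh / start-yarn.sh) after changing it, and to make the same change on every node.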
As for the Secondary NameNode, I had used hadoop1:9000 for both fs.defaultFS and dfs.namenode.secondary.http-address, so the two daemons conflicted over the same port. I moved the secondary address to 9001 and everything worked!
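The corrected hdfs-site.xml entry would then look like this (9001 as described above; any free port on hadoop1 would do):

```xml
    <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>hadoop1:9001</value>
    </property>
```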
