On CentOS, make sure the `fuser` utility is installed:

```
yum install psmisc
```

Note: this step is mandatory. Without `fuser`, the standby NameNode cannot automatically switch to active.
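As a quick guard, here is a small sketch (my addition, not part of the original walkthrough) that checks for `fuser` before relying on `sshfence`:

```shell
# sshfence shells out to fuser (shipped in the psmisc package) to kill the
# old active NameNode; if the binary is missing, failover silently fails.
if command -v fuser >/dev/null 2>&1; then
  echo "fuser present"
else
  echo "fuser missing - run: sudo yum install -y psmisc"
fi
```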
### Cluster overview

OS | hostname | IP | Hadoop version | Java version
--|--|--|--|--
CentOS Linux release 7.6.1810 (Core)|bigdata01|172.17.57.229|Hadoop 2.8.5|1.8.0_211
CentOS Linux release 7.6.1810 (Core)|bigdata02|172.17.57.234|Hadoop 2.8.5|1.8.0_211
CentOS Linux release 7.6.1810 (Core)|bigdata03|172.17.57.226|Hadoop 2.8.5|1.8.0_211
### Cluster roles

 | bigdata01 | bigdata02 | bigdata03
---|---|---|---
HDFS | NameNode (nn1), DataNode | DataNode | NameNode (nn2), DataNode
YARN | ResourceManager (rm1), NodeManager | ResourceManager (rm2), NodeManager | NodeManager
### Set up passwordless SSH

Generate an RSA key pair and copy the public key to every node (shown here on bigdata03):

```
[yetao_yang@bigdata03 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/yetao_yang/.ssh/id_rsa):
Created directory '/home/yetao_yang/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/yetao_yang/.ssh/id_rsa.
Your public key has been saved in /home/yetao_yang/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:yG2zOMq56cs5yK0S5NtfTDUNZxmYbOZ0T8IhyseYM1w yetao_yang@bigdata03
The key's randomart image is:
+---[RSA 2048]----+
| .E+=+ |
| o *B*= . |
| O==..+ |
| . . o=.. . |
|o o.S |
|.. oo o |
| ooo oo. |
|..o+o=.. |
| ..o%= |
+----[SHA256]-----+
[yetao_yang@bigdata03 ~]$ ssh-copy-id bigdata03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/yetao_yang/.ssh/id_rsa.pub"
The authenticity of host 'bigdata03 (172.17.57.226)' can't be established.
ECDSA key fingerprint is SHA256:6AGtLWnSlG5AfTew3b59FeREHNrZT5607KGfYhDa/VY.
ECDSA key fingerprint is MD5:1f:12:79:ac:d7:50:fc:0e:f7:e1:97:fe:2d:e8:7d:1b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yetao_yang@bigdata03's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'bigdata03'"
and check to make sure that only the key(s) you wanted were added.
[yetao_yang@bigdata03 ~]$ ssh-copy-id bigdata02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/yetao_yang/.ssh/id_rsa.pub"
The authenticity of host 'bigdata02 (172.17.57.234)' can't be established.
ECDSA key fingerprint is SHA256:SNusOQIVlsyhQkpsNWZVRW58H6bfDwVm1M76yoTpifU.
ECDSA key fingerprint is MD5:97:4d:51:54:66:26:54:57:74:7c:3b:04:86:69:8d:bf.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yetao_yang@bigdata02's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'bigdata02'"
and check to make sure that only the key(s) you wanted were added.
[yetao_yang@bigdata03 ~]$ ssh-copy-id bigdata01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/yetao_yang/.ssh/id_rsa.pub"
The authenticity of host 'bigdata01 (172.17.57.229)' can't be established.
ECDSA key fingerprint is SHA256:hBPvCOcvKn9x7hmeS31SoxDJQH4jVxWx3h3C3rgFj+o.
ECDSA key fingerprint is MD5:73:2f:cd:5d:6b:34:22:0f:37:d4:24:8e:44:2f:cb:a4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yetao_yang@bigdata01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'bigdata01'"
and check to make sure that only the key(s) you wanted were added.
Perform the same key-generation and `ssh-copy-id` steps on the other nodes as well.

In `~/.bashrc`, point `HADOOP_HOME` at the directory where the Hadoop archive was unpacked (`/home/yetao_yang/hadoop/hadoop-2.8.5` in this cluster):

```
export HADOOP_HOME=/home/yetao_yang/hadoop/hadoop-2.8.5
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
```

Reload the file and verify:

```
source ~/.bashrc
hadoop version
```
* Change the `JAVA_HOME` path in `hadoop-env.sh` to an absolute path (use `echo $JAVA_HOME` to look it up)
* Configure `hdfs-site.xml`:
```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>bigdata01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>bigdata03:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>bigdata01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>bigdata03:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://bigdata01:8485;bigdata02:8485;bigdata03:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/yetao_yang/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```
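The `dfs.namenode.shared.edits.dir` value follows a fixed shape: `qjournal://host:port;host:port;.../journal-id`. As a sketch (hostnames and the default JournalNode port 8485 taken from this cluster), the URI can be assembled from a host list:

```shell
# Build the shared-edits URI from the JournalNode host list.
# 8485 is the JournalNode default port; "mycluster" must match dfs.nameservices.
hosts="bigdata01 bigdata02 bigdata03"
uri="qjournal://$(echo "$hosts" | sed 's/ /:8485;/g'):8485/mycluster"
echo "$uri"
# prints: qjournal://bigdata01:8485;bigdata02:8485;bigdata03:8485/mycluster
```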
* Configure `core-site.xml`:
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yetao_yang/hadoop/data/hadoop/tmp</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/yetao_yang/hadoop/data/jn</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
  </property>
</configuration>
```
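`hadoop.tmp.dir` and `dfs.journalnode.edits.dir` point at local directories that must exist on every node. A sketch that pre-creates them (the `DATA_ROOT` default mirrors this cluster's layout; adjust it for yours):

```shell
# Pre-create the local data directories referenced in core-site.xml.
# DATA_ROOT is an assumed variable standing in for /home/yetao_yang/hadoop/data.
DATA_ROOT="${DATA_ROOT:-$HOME/hadoop/data}"
mkdir -p "$DATA_ROOT/hadoop/tmp" "$DATA_ROOT/jn"
ls "$DATA_ROOT"
```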
* Configure `slaves`:
```
bigdata01
bigdata02
bigdata03
```
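All three nodes need identical configuration files. A dry-run sketch of pushing the `etc/hadoop` directory to the other nodes (hostnames from this cluster; drop the `echo` to actually copy):

```shell
# Dry run: print the scp commands that would sync the config directory.
# Assumes passwordless SSH from the node where the files were edited.
HADOOP_HOME="${HADOOP_HOME:-/home/yetao_yang/hadoop/hadoop-2.8.5}"
for host in bigdata02 bigdata03; do
  echo scp -r "$HADOOP_HOME/etc/hadoop" "$host:$HADOOP_HOME/etc/"
done
```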
### Configure YARN high availability
* Change the `JAVA_HOME` path in `yarn-env.sh` to an absolute path
* Use `echo $JAVA_HOME` to look up the path
* Edit `yarn-site.xml`:
```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster-yarn</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>bigdata01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>bigdata02</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>bigdata01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>bigdata02:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
</configuration>
```
* Change the `JAVA_HOME` path in `mapred-env.sh` to an absolute path (use `echo $JAVA_HOME` to look it up)
* Rename `mapred-site.xml.template` to `mapred-site.xml` and add:
```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```
* Add the following to `yarn-site.xml` (log aggregation for the job history server):
```xml
<property>
  <name>yarn.log.server.url</name>
  <value>http://bigdata01:19888/jobhistory/logs/</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
```
* Add the following to `mapred-site.xml`:
```xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>bigdata01:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>bigdata01:19888</value>
</property>
```
### Initialize and start the cluster
* Start the `journalnode` service on each JournalNode host:
```
sbin/hadoop-daemon.sh start journalnode
```
* On the first NameNode (bigdata01), format HDFS and start the NameNode:
```
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
```
* On the second NameNode (bigdata03), sync the metadata from the first:
```
bin/hdfs namenode -bootstrapStandby
```
* Stop HDFS, start ZooKeeper on every node, and initialize the failover state in ZooKeeper:
```
sbin/stop-dfs.sh
bin/zkServer.sh start
bin/hdfs zkfc -formatZK
```

The steps above only need to be performed once. After that, start the cluster with:

```
sbin/start-dfs.sh
sbin/hadoop-daemon.sh start zkfc
sbin/start-yarn.sh
sbin/yarn-daemon.sh start resourcemanager
bin/yarn rmadmin -getServiceState rm1
mr-jobhistory-daemon.sh start historyserver
```
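The routine (post-initialization) start sequence can be sketched as a dry-run script; the `run` wrapper only echoes each step, and the host annotations reflect this cluster's layout:

```shell
# Dry run of the routine start order; replace run() with direct
# execution once the one-time initialization has been done.
run() { echo "would run: $*"; }
run bin/zkServer.sh start                        # on every ZooKeeper node
run sbin/start-dfs.sh                            # NameNodes, DataNodes, JournalNodes
run sbin/hadoop-daemon.sh start zkfc             # on both NameNode hosts
run sbin/start-yarn.sh                           # on bigdata01
run sbin/yarn-daemon.sh start resourcemanager    # standby RM, on bigdata02
run mr-jobhistory-daemon.sh start historyserver  # on bigdata01
```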
Check the running daemons with `jps` on each node:

```
[yetao_yang@bigdata01 hadoop]$ jps
19697 JournalNode
19889 DFSZKFailoverController
19490 DataNode
21683 JobHistoryServer
19995 ResourceManager
20539 NameNode
6635 Jps
16126 QuorumPeerMain
20111 NodeManager
[yetao_yang@bigdata02 hadoop]$ jps
17841 JournalNode
17937 NodeManager
18273 ResourceManager
25236 JobHistoryServer
17738 DataNode
15964 QuorumPeerMain
26013 Jps
[yetao_yang@bigdata03 hadoop]$ jps
29266 Jps
18213 JournalNode
18341 DFSZKFailoverController
18422 NodeManager
15950 QuorumPeerMain
18110 DataNode
18031 NameNode
```
### Verify failover
HDFS: open the 50070 web UI on both NameNodes, kill the NameNode process whose state is `active` from the shell, then refresh and confirm the `standby` NameNode has switched to `active`.
YARN: open the 8088 web UI on both ResourceManagers and confirm that both addresses redirect to the same (active) RM.
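The HDFS check can also be scripted: exactly one of the two NameNodes should report `active`. In the sketch below the `states` variable is hardcoded; in practice it would come from `bin/hdfs haadmin -getServiceState nn1` and `... nn2`:

```shell
# Healthy HA: exactly one active NameNode among nn1 and nn2.
states="active standby"   # stand-in for the two haadmin -getServiceState results
actives=$(printf '%s\n' $states | grep -c '^active$')
if [ "$actives" -eq 1 ]; then echo "HA healthy"; else echo "HA broken: $actives active"; fi
```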