tar -zxvf flink-1.6.4-bin-hadoop27-scala_2.11.tgz -C /opt/
mv /opt/flink-1.6.4/ /opt/flink
vi /opt/flink/conf/flink-conf.yaml # 1: point the JobManager RPC address at the master node
jobmanager.rpc.address: flink01
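Other flink-conf.yaml settings commonly tuned for a standalone cluster (the values below are illustrative, not required for this setup):
taskmanager.numberOfTaskSlots: 2   # processing slots offered by each TaskManager
parallelism.default: 2             # default parallelism for jobs that do not set their own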
vi /opt/flink/conf/masters # 2: list the JobManager host and web UI port (or simply: echo "flink01:8081" > /opt/flink/conf/masters)
flink01:8081
vi /opt/flink/conf/slaves # 3: list the TaskManager (worker) hosts, one per line
flink02
flink03
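A quick sanity check of the three edits above before distributing the files:
grep jobmanager.rpc.address /opt/flink/conf/flink-conf.yaml
cat /opt/flink/conf/masters /opt/flink/conf/slaves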
Distribute the files
Important: keep the flink directory at the same path on every server!
scp -r /opt/flink/ root@flink02:/opt/
scp -r /opt/flink/ root@flink03:/opt/
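An equivalent loop form of the two scp commands above, handy if more worker nodes are added later (same host names as above):
for host in flink02 flink03; do
  scp -r /opt/flink/ root@${host}:/opt/
done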
The following must be configured on every node in the cluster
echo "export FLINK_BIN_DIR=/opt/flink/bin" >>/etc/profile \
&& echo "export PATH=\$PATH:\$FLINK_BIN_DIR" >>/etc/profile \
&& source /etc/profile
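A quick check that the PATH change took effect on each node (the exact output of flink --version depends on the build):
echo $FLINK_BIN_DIR   # should print /opt/flink/bin
which flink           # should resolve to /opt/flink/bin/flink
flink --version       # should print the client version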
Start the cluster
/opt/flink/bin/start-cluster.sh
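To confirm the cluster came up, check the processes and the REST endpoint on port 8081 (the port set in the masters file; the process names below are what Flink 1.6's runtime typically reports):
jps   # flink01: StandaloneSessionClusterEntrypoint, flink02/03: TaskManagerRunner
curl http://flink01:8081/overview   # REST summary: TaskManager count, total slots, running jobs
The web UI itself is served at http://flink01:8081/.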
Stop the cluster
/opt/flink/bin/stop-cluster.sh
Starting and stopping a single node
/opt/flink/bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
/opt/flink/bin/taskmanager.sh start|start-foreground|stop|stop-all
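For example, to add or remove one TaskManager without restarting the rest of the cluster, run on that worker node (it joins the JobManager configured in its flink-conf.yaml):
/opt/flink/bin/taskmanager.sh start
/opt/flink/bin/taskmanager.sh stop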
Run a test job (start Hadoop first)
/opt/flink/bin/flink run /opt/flink/examples/batch/WordCount.jar --input hdfs://flink01:9000/input/part1rfid0901.txt --output hdfs://flink01:9000/output/part1rfid0901.output.txt
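The job above expects the input file to already exist in HDFS; a sketch of preparing it and inspecting the result (part1rfid0901.txt is the local file from the example above, any text file works):
hdfs dfs -mkdir -p /input
hdfs dfs -put part1rfid0901.txt /input/
hdfs dfs -ls /output/   # after the job finishes; the result may be a single file or a directory of part files, depending on parallelism
Without the --input/--output arguments, the same WordCount.jar runs on built-in sample data and prints the result to stdout, which makes for a quicker smoke test that does not need Hadoop.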
Hadoop (HDFS NameNode web UI): http://flink01:50070/