Redis Cluster Setup Walkthrough


Environment preparation

We will create six Redis instances (three masters, three slaves) across three servers; on each server, create two data directories and two configuration files (see the mkdir sketch after the node list below).

  • Cluster master nodes
    19.19.92.138:7001
    19.19.92.39:7003
    19.19.92.73:7005
  • Cluster slave nodes
    19.19.92.138:7002
    19.19.92.39:7004
    19.19.92.73:7006
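
The per-instance directories can be created up front; a minimal sketch for 19.19.92.138 (use 7003/7004 and 7005/7006 on the other two servers):

mkdir -p /usr/local/redis-cluster/7001 /usr/local/redis-cluster/7002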

Install the build environment

yum install gcc-c++ -y

Compile Redis

wget http://download.redis.io/releases/redis-5.0.5.tar.gz 
tar xzf redis-5.0.5.tar.gz
cd redis-5.0.5
make
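
To sanity-check the build, you can run the freshly compiled binary with --version:

./src/redis-server --version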

Create the configuration files

Since each machine runs one master and one slave, we need two configuration files in two separate directories.
On 19.19.92.138:

/usr/local/redis-cluster/7001/redis.conf
/usr/local/redis-cluster/7002/redis.conf

On 19.19.92.39:

/usr/local/redis-cluster/7003/redis.conf
/usr/local/redis-cluster/7004/redis.conf

On 19.19.92.73:

/usr/local/redis-cluster/7005/redis.conf
/usr/local/redis-cluster/7006/redis.conf

Each configuration file contains:

port 7001  # port (change per instance)
cluster-enabled yes  # enable cluster mode
cluster-config-file nodes.conf  # cluster state file, maintained by Redis itself
cluster-node-timeout 5000  # node timeout in milliseconds
appendonly yes  # enable AOF persistence
daemonize yes  # run in the background
protected-mode no  # disable protected mode
pidfile /var/run/redis_7001.pid  # PID file (change per instance)

The port and pidfile values must be adjusted to match each instance's directory (7001 through 7006).
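
If you prefer not to edit six files by hand, a small shell loop can generate both configs on a host; this is only a sketch, run once per server with that server's two port numbers:

for p in 7001 7002; do
  mkdir -p /usr/local/redis-cluster/$p
  cat > /usr/local/redis-cluster/$p/redis.conf <<EOF
port $p
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
daemonize yes
protected-mode no
pidfile /var/run/redis_$p.pid
EOF
done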

Start the nodes

/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7001/redis.conf
/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7002/redis.conf
/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7003/redis.conf
/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7004/redis.conf
/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7005/redis.conf
/root/redis-5.0.5/src/redis-server /usr/local/redis-cluster/7006/redis.conf
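
To confirm the local instances actually started, you can ping each port; both should answer PONG:

/root/redis-5.0.5/src/redis-cli -p 7001 ping
/root/redis-5.0.5/src/redis-cli -p 7002 ping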

Create the cluster

/root/redis-5.0.5/src/redis-cli --cluster create 19.19.92.138:7001 19.19.92.138:7002 19.19.92.39:7003 19.19.92.39:7004 19.19.92.73:7005 19.19.92.73:7006 --cluster-replicas 1

You may see the following error:

Could not connect to Redis at 19.19.92.39:7003: No route to host

This happens because the CentOS firewall has not opened the relevant ports; they need to be opened on every node (shown here for 19.19.92.138; use 7003/7004 and 7005/7006 on the other servers):

firewall-cmd --zone=public --add-port=7001/tcp --permanent
firewall-cmd --zone=public --add-port=7002/tcp --permanent
firewall-cmd --reload
firewall-cmd --query-port=7001/tcp  # check whether the port is open

Then, when creating the cluster again, the command got stuck at the following point:

/root/redis-5.0.5/src/redis-cli --cluster create 19.19.92.138:7001 19.19.92.138:7002 19.19.92.39:7003 19.19.92.39:7004 19.19.92.73:7005 19.19.92.73:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 19.19.92.39:7004 to 19.19.92.138:7001
Adding replica 19.19.92.73:7006 to 19.19.92.39:7003
Adding replica 19.19.92.138:7002 to 19.19.92.73:7005
M: c48fbeb728fc345ca2efa2db32253732c5c7540e 19.19.92.138:7001
   slots:[0-5460] (5461 slots) master
S: ecc7f6d30622e8518ee2696efafc285e81314a30 19.19.92.138:7002
   replicates 36cb98692804864254aeda28a069360f672e648e
M: 835b4521c598fbb7d6d2f0b52ee0a839846d4059 19.19.92.39:7003
   slots:[5461-10922] (5462 slots) master
S: 24c9f7ff5bf3961567ac4a227adbdbc68e2e316c 19.19.92.39:7004
   replicates c48fbeb728fc345ca2efa2db32253732c5c7540e
M: 36cb98692804864254aeda28a069360f672e648e 19.19.92.73:7005
   slots:[10923-16383] (5461 slots) master
S: f353dcc0e8f2e8d31a27b488fb7282b607883f8a 19.19.92.73:7006
   replicates 835b4521c598fbb7d6d2f0b52ee0a839846d4059
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

...

This is because a Redis cluster needs more than the client ports: each node also uses a cluster bus port, which is the client port + 10000. For example, if the Redis port is 6379, the cluster bus port is 16379.
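
So with firewalld enabled, the bus ports would have to be opened as well; a sketch for 19.19.92.138 (the other servers would need 17003/17004 and 17005/17006):

firewall-cmd --zone=public --add-port=17001/tcp --permanent
firewall-cmd --zone=public --add-port=17002/tcp --permanent
firewall-cmd --reload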

Here I simply turned the firewall off altogether:

systemctl stop firewalld
systemctl disable firewalld
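
Before retrying, you can confirm firewalld is really down:

firewall-cmd --state  # prints "not running" once firewalld is stopped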

Running the cluster create command again produced the following error:

[ERR] Node 19.19.92.138:7002 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

This happens because the nodes still hold data or stale cluster state from the failed attempt. Connect to every node and clear it with the following Redis commands (a scripted version follows the transcript):

[root@localhost 7002]# /root/redis-5.0.5/src/redis-cli -h 19.19.92.138 -p 7002
19.19.92.138:7002> flushall
OK
19.19.92.138:7002> cluster reset
OK
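
Instead of logging into all six nodes one by one, the same cleanup can be scripted from a single host; this is just a sketch using the node list from above:

for addr in 19.19.92.138:7001 19.19.92.138:7002 19.19.92.39:7003 \
            19.19.92.39:7004 19.19.92.73:7005 19.19.92.73:7006; do
  h=${addr%:*}; p=${addr#*:}  # split host:port
  /root/redis-5.0.5/src/redis-cli -h "$h" -p "$p" flushall
  /root/redis-5.0.5/src/redis-cli -h "$h" -p "$p" cluster reset
done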

After that, cluster creation completed successfully:

[root@localhost 7002]# /root/redis-5.0.5/src/redis-cli --cluster create 19.19.92.138:7001 19.19.92.138:7002 19.19.92.39:7003 19.19.92.39:7004 19.19.92.73:7005 19.19.92.73:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 19.19.92.39:7004 to 19.19.92.138:7001
Adding replica 19.19.92.73:7006 to 19.19.92.39:7003
Adding replica 19.19.92.138:7002 to 19.19.92.73:7005
M: c48fbeb728fc345ca2efa2db32253732c5c7540e 19.19.92.138:7001
   slots:[0-5460] (5461 slots) master
S: ecc7f6d30622e8518ee2696efafc285e81314a30 19.19.92.138:7002
   replicates 36cb98692804864254aeda28a069360f672e648e
M: 835b4521c598fbb7d6d2f0b52ee0a839846d4059 19.19.92.39:7003
   slots:[5461-10922] (5462 slots) master
S: 24c9f7ff5bf3961567ac4a227adbdbc68e2e316c 19.19.92.39:7004
   replicates c48fbeb728fc345ca2efa2db32253732c5c7540e
M: 36cb98692804864254aeda28a069360f672e648e 19.19.92.73:7005
   slots:[10923-16383] (5461 slots) master
S: f353dcc0e8f2e8d31a27b488fb7282b607883f8a 19.19.92.73:7006
   replicates 835b4521c598fbb7d6d2f0b52ee0a839846d4059
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 19.19.92.138:7001)
M: c48fbeb728fc345ca2efa2db32253732c5c7540e 19.19.92.138:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 36cb98692804864254aeda28a069360f672e648e 19.19.92.73:7005
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 835b4521c598fbb7d6d2f0b52ee0a839846d4059 19.19.92.39:7003
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 24c9f7ff5bf3961567ac4a227adbdbc68e2e316c 19.19.92.39:7004
   slots: (0 slots) slave
   replicates c48fbeb728fc345ca2efa2db32253732c5c7540e
S: f353dcc0e8f2e8d31a27b488fb7282b607883f8a 19.19.92.73:7006
   slots: (0 slots) slave
   replicates 835b4521c598fbb7d6d2f0b52ee0a839846d4059
S: ecc7f6d30622e8518ee2696efafc285e81314a30 19.19.92.138:7002
   slots: (0 slots) slave
   replicates 36cb98692804864254aeda28a069360f672e648e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify the cluster

[root@localhost 7002]# /root/redis-5.0.5/src/redis-cli -h 19.19.92.138 -p 7002
19.19.92.138:7002> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:5
cluster_stats_messages_ping_sent:128497
cluster_stats_messages_pong_sent:128400
cluster_stats_messages_meet_sent:6
cluster_stats_messages_sent:256903
cluster_stats_messages_ping_received:128399
cluster_stats_messages_pong_received:128503
cluster_stats_messages_meet_received:1
cluster_stats_messages_received:256903
19.19.92.138:7002> cluster nodes
835b4521c598fbb7d6d2f0b52ee0a839846d4059 19.19.92.39:7003@17003 master - 0 1560498552525 3 connected 5461-10922
36cb98692804864254aeda28a069360f672e648e 19.19.92.73:7005@17005 master - 0 1560498552000 5 connected 10923-16383
ecc7f6d30622e8518ee2696efafc285e81314a30 19.19.92.138:7002@17002 myself,slave 36cb98692804864254aeda28a069360f672e648e 0 1560498551000 2 connected
24c9f7ff5bf3961567ac4a227adbdbc68e2e316c 19.19.92.39:7004@17004 slave c48fbeb728fc345ca2efa2db32253732c5c7540e 0 1560498551523 4 connected
c48fbeb728fc345ca2efa2db32253732c5c7540e 19.19.92.138:7001@17001 master - 0 1560498551000 1 connected 0-5460
f353dcc0e8f2e8d31a27b488fb7282b607883f8a 19.19.92.73:7006@17006 slave 835b4521c598fbb7d6d2f0b52ee0a839846d4059 0 1560498552024 6 connected
19.19.92.138:7002>
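
As a final smoke test, you can write and read a key in cluster mode. The -c flag makes redis-cli follow MOVED redirections, so the key is stored on whichever master owns its hash slot (the key name foo is just an example):

/root/redis-5.0.5/src/redis-cli -c -h 19.19.92.138 -p 7001 set foo bar
/root/redis-5.0.5/src/redis-cli -c -h 19.19.92.138 -p 7001 get foo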
