Preface: following the previous post on setting up a Redis Cluster environment, I thought it worth recording some day-to-day cluster administration commands, so I ran through them hands-on and documented the process.
Local cluster layout: 192.168.3.104 hosts 7001 and 7002; 192.168.3.105 hosts 7003 and 7004; 192.168.3.106 hosts 7005 and 7006.
1. Check cluster status: redis-trib.rb check [ip]:[port]
The IP and port passed to this command can be those of any Redis node in the cluster.
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
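As a quick sanity check outside of redis-trib, the slot ranges printed above can be parsed and summed to confirm full coverage. A minimal sketch (the sample text and parsing logic are my own illustration, not part of redis-trib):

```python
import re

# Sample master lines as printed by `redis-trib.rb check`
check_output = """
slots:5461-10922 (5462 slots) master
slots:0-5460 (5461 slots) master
slots:10923-16383 (5461 slots) master
"""

def covered_slots(text):
    """Sum the sizes of all slot ranges found in redis-trib check output."""
    total = 0
    for start, end in re.findall(r"(\d+)-(\d+)", text):
        total += int(end) - int(start) + 1  # ranges are inclusive
    return total

print(covered_slots(check_output))  # 16384 means every hash slot is owned
```

The same function handles multi-range lines such as `slots:0-332,5461-5794,10923-11255`, since it simply collects every `start-end` pair.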
2. Add a master node: redis-trib.rb add-node 192.168.3.107:7007 192.168.3.105:7003
Notes:
192.168.3.107:7007 is the new node
192.168.3.105:7003 is any existing node in the cluster
[root@eshop-cache02 init.d]# redis-trib.rb add-node 192.168.3.107:7007 192.168.3.105:7003
>>> Adding node 192.168.3.107:7007 to cluster 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.107:7007 to make it join the cluster.
[OK] New node added correctly.
Checking the cluster again shows a new 7007 master node, but at this point 7007 owns 0 slots:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots: (0 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3. Add a slave node: redis-trib.rb add-node --slave --master-id fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7008 192.168.3.105:7003
Notes:
--slave indicates that the node being added is a slave
--master-id fb348bf965eeb8fbf585923130fcd36237d54c6d is the node ID of the master; here it is the ID of the 7007 node just added
192.168.3.107:7008 is the new node
192.168.3.105:7003 is any existing node in the cluster
[root@eshop-cache02 init.d]# redis-trib.rb add-node --slave --master-id fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7008 192.168.3.105:7003
>>> Adding node 192.168.3.107:7008 to cluster 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots: (0 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.107:7008 to make it join the cluster.
Waiting for the cluster to join.....
>>> Configure node as replica of 192.168.3.107:7007.
[OK] New node added correctly.
Checking the cluster again shows the new 7008 slave node:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
slots: (0 slots) slave
replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots: (0 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
4. Reassign slots: redis-trib.rb reshard 192.168.3.107:7007
[root@eshop-cache02 init.d]# redis-trib.rb reshard 192.168.3.107:7007
>>> Performing Cluster Check (using node 192.168.3.107:7007)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots: (0 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
slots: (0 slots) slave
replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1000 // number of slots to move
What is the receiving node ID? fb348bf965eeb8fbf585923130fcd36237d54c6d // node ID of the newly added master
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:all // 'all' draws slots from every master (or enter source node IDs one by one, then 'done')
(output omitted)
Do you want to proceed with the proposed reshard plan (yes/no)? yes // confirm the reshard plan
(output omitted)
Moving slot 330 from 192.168.3.105:7004 to 192.168.3.107:7007:
Moving slot 331 from 192.168.3.105:7004 to 192.168.3.107:7007:
Moving slot 332 from 192.168.3.105:7004 to 192.168.3.107:7007:
[root@eshop-cache02 init.d]#
The reshard is complete.
Checking the cluster again shows that 7007 now owns 1000 slots, with the remaining slots spread evenly across the other masters:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5795-10922 (5128 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
slots: (0 slots) slave
replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:333-5460 (5128 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:11256-16383 (5128 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots:0-332,5461-5794,10923-11255 (1000 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
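The per-source breakdown above follows from proportional allocation: redis-trib draws the 1000 slots from each source master in proportion to how many slots it currently owns. A rough model of that arithmetic (my own sketch of the idea, not redis-trib's actual code):

```python
def plan_reshard(sources, to_move):
    """Split `to_move` slots across source masters proportionally to their size."""
    total = sum(sources.values())
    plan = {n: to_move * size // total for n, size in sources.items()}
    # Hand out any rounding remainder one slot at a time, largest source first
    for n in sorted(sources, key=sources.get, reverse=True):
        if sum(plan.values()) == to_move:
            break
        plan[n] += 1
    return plan

# The three original masters, before 7007 was added
sources = {"7003": 5462, "7004": 5461, "7005": 5461}
plan = plan_reshard(sources, 1000)
print(plan)                                        # slots taken from each source
print({n: sources[n] - plan[n] for n in sources})  # slots left on each source
```

This reproduces the check output above: 7003 gives up 334 slots (5461-5794), 7004 and 7005 give up 333 each, leaving all three with 5128 slots.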
5. Remove a slave node: redis-trib.rb del-node [IP]:[port] 'node-id'
Notes:
The IP and port are those of the slave node to remove; node-id is that slave's node ID.
[root@eshop-cache02 init.d]# redis-trib.rb del-node 192.168.3.107:7008 'b5687d801ebdef7c8057ff4a4f257d32d3e022e4'
>>> Removing node b5687d801ebdef7c8057ff4a4f257d32d3e022e4 from cluster 192.168.3.107:7008
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Removal complete. Checking the cluster again shows that the 7008 node is gone:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5795-10922 (5128 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:333-5460 (5128 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:11256-16383 (5128 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots:0-332,5461-5794,10923-11255 (1000 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
6. Remove a master node: redis-trib.rb reshard [IP]:[port]
If the master has slave nodes, move the slaves to other masters first.
If the master owns slots, reshard them away, and only then delete the master.
[root@eshop-cache02 init.d]# redis-trib.rb reshard 192.168.3.107:7007
>>> Performing Cluster Check (using node 192.168.3.107:7007)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots:0-332,5461-5794,10923-11255 (1000 slots) master
0 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:11256-16383 (5128 slots) master
1 additional replica(s)
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:5795-10922 (5128 slots) master
1 additional replica(s)
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:333-5460 (5128 slots) master
1 additional replica(s)
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 29fbdff232cba71ae300fd8900e8e391d8455658
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:fb348bf965eeb8fbf585923130fcd36237d54c6d
Source node #2:done
(output omitted)
Moving slot 11255 from fb348bf965eeb8fbf585923130fcd36237d54c6d
Do you want to proceed with the proposed reshard plan (yes/no)? yes
(output omitted)
Moving slot 11253 from 192.168.3.107:7007 to 192.168.3.105:7003:
Moving slot 11254 from 192.168.3.107:7007 to 192.168.3.105:7003:
Moving slot 11255 from 192.168.3.107:7007 to 192.168.3.105:7003:
This strips 7007 of its slots and reassigns them to 7003; in effect, the 1000 slots 7007 used to hold have been moved to 7003. Checking the cluster:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:0-332,5461-11255 (6128 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:333-5460 (5128 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:11256-16383 (5128 slots) master
1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
slots: (0 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
7007 now owns no slots but still exists in the cluster, so we need to delete it:
redis-trib.rb del-node [ip]:[port] 'node-id' ; note: the IP, port, and node ID all belong to the node being deleted
[root@eshop-cache02 init.d]# redis-trib.rb del-node 192.168.3.107:7007 'fb348bf965eeb8fbf585923130fcd36237d54c6d'
>>> Removing node fb348bf965eeb8fbf585923130fcd36237d54c6d from cluster 192.168.3.107:7007
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Checking the cluster again confirms that the 7007 master has been removed:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
slots:0-332,5461-11255 (6128 slots) master
1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
slots: (0 slots) slave
replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
slots: (0 slots) slave
replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
slots: (0 slots) slave
replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
slots:333-5460 (5128 slots) master
1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
slots:11256-16383 (5128 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
7. Move a slave node to a different master
7.1 Check the slaves of 7005: redis-cli -h [master IP] -p [master port] cluster nodes | grep slave | grep [master node ID]
Note: the IP, port, and node ID all belong to the master being inspected.
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.106 -p 7005 cluster nodes | grep slave | grep be88db01183aa149949ee8abbb92633081082a7f
40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave be88db01183aa149949ee8abbb92633081082a7f 0 1558886624370 6 connected
7005 has one slave, 7006.
7.2 Attach 7006 to the new master, 7003
[root@eshop-cache02 init.d]# redis-cli -c -h 192.168.3.106 -p 7006 // first log in to the 7006 slave with -c (cluster mode)
192.168.3.106:7006> cluster replicate 29fbdff232cba71ae300fd8900e8e391d8455658 // the argument is the new master's node ID
OK
192.168.3.106:7006> exit // log out
7.3 Check the slaves of 7005 and 7003 again (7005 now has no slaves; 7003 has two):
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.106 -p 7005 cluster nodes | grep slave | grep be88db01183aa149949ee8abbb92633081082a7f
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.105 -p 7003 cluster nodes | grep slave | grep 29fbdff232cba71ae300fd8900e8e391d8455658
1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887034628 10 connected
40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887033624 10 connected
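The `grep` pipelines above can also be replicated in code by parsing the whitespace-separated fields of `cluster nodes` output (node ID, address, flags, master ID, and so on). A small sketch using the two sample lines above (the parsing here is my own illustration of the documented field order):

```python
# Each `cluster nodes` line: <id> <ip:port> <flags> <master-id> <ping> <pong> <epoch> <state> ...
nodes_output = """\
1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887034628 10 connected
40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887033624 10 connected
"""

def slaves_of(text, master_id):
    """Return the addresses of all slaves replicating the given master."""
    result = []
    for line in text.strip().splitlines():
        fields = line.split()
        # The flags field is comma-separated, e.g. "slave" or "myself,slave"
        if "slave" in fields[2].split(",") and fields[3] == master_id:
            result.append(fields[1])
    return result

print(slaves_of(nodes_output, "29fbdff232cba71ae300fd8900e8e391d8455658"))
```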
8. Clearing the Redis cache
8.1 Log in to Redis
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.105 -p 7003
192.168.3.105:7003>
8.2 Run dbsize to count the keys in the current database
192.168.3.105:7003> dbsize
(integer) 2
8.3 List all keys: keys *
192.168.3.105:7003> keys *
1) "k3"
2) "k2"
8.4 Delete a specific key: del [key]
192.168.3.105:7003> del k2
(integer) 1
8.5 Run flushall to delete every key
192.168.3.105:7003> flushall
OK
9. Upgrading nodes
Upgrading a slave node is simple: just stop the node and restart it with the updated Redis version. If clients use slave nodes to offload reads, they should be able to reconnect to another slave when one becomes unavailable.
Upgrading a master is a little more involved. The recommended steps are:
1) Use cluster failover to trigger a manual failover of the master (see the manual failover section of the documentation).
2) Wait for the master to become a slave.
3) Upgrade the node just as you would a slave.
4) If you want the node you just upgraded to be a master again, trigger another manual failover to promote it back.
Follow these steps node by node until every node has been upgraded.
10. Common commands that must be run after logging in to a cluster node
cluster info: print information about the cluster.
cluster nodes: list all nodes currently known to the cluster, along with their details.
Node commands
cluster meet <ip> <port>: add the node at ip:port to the cluster, making it part of the cluster.
cluster forget <node_id>: remove the node identified by node_id from the cluster.
cluster replicate <master_node_id>: make the current node a slave of the master identified by master_node_id. Can only be run on a slave node.
cluster saveconfig: save the node's configuration file to disk.
Slot commands
cluster addslots <slot> [slot ...]: assign one or more slots to the current node.
cluster delslots <slot> [slot ...]: remove the assignment of one or more slots from the current node.
cluster flushslots: remove all slots assigned to the current node, leaving it with no slots at all.
cluster setslot <slot> node <node_id>: assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node deletes the slot first, then the assignment takes place.
cluster setslot <slot> migrating <node_id>: migrate the slot from this node to the node identified by node_id.
cluster setslot <slot> importing <node_id>: import the slot from the node identified by node_id into this node.
cluster setslot <slot> stable: cancel an ongoing import or migration of the slot.
Key commands
cluster keyslot <key>: compute which slot the key should be placed in.
cluster countkeysinslot <slot>: return the number of key-value pairs the slot currently holds.
cluster getkeysinslot <slot> <count>: return up to count keys from the slot.
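cluster keyslot is defined as CRC16 of the key modulo 16384, where only the substring inside the first non-empty {...} hash tag is hashed if one is present. A self-contained sketch of that algorithm (the CRC16 variant is CCITT/XModem, as given in the cluster specification):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    """Hash slot of a key, honoring {...} hash tags like Redis Cluster does."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16(key) % 16384

print(keyslot(b"foo"))  # 12182, matching `cluster keyslot foo`
```

Keys sharing a hash tag (e.g. {user1000}.following and {user1000}.followers) land in the same slot, which is what makes multi-key operations on them possible in a cluster.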
11. Using redis cluster's automatic slave migration for a more highly available deployment
Slaves migrate automatically: if some master's slave dies, redis cluster automatically migrates a spare slave to that master (when a master has more than one slave, the extra ones are spares eligible for migration).
If every master has exactly one slave, then losing a slave followed shortly by its master leaves the cluster degraded. To avoid this, we can attach some spare slaves to the cluster. When some master's slave dies, a spare slave is migrated over automatically to become that master's new slave; even if that master then dies as well, there is still a slave to promote to master.
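The migration rule described above can be modeled very simply: a master left with zero slaves (an orphan) receives one slave from whichever master currently has the most spares. This toy model is my own simplification of the real behavior (which also takes node IDs and the cluster-migration-barrier setting into account):

```python
def migrate_spare(replica_counts):
    """Move one slave from the best donor to an orphaned master, if any.

    replica_counts maps master name -> number of live slaves.
    Returns (donor, orphan), or None if no migration is needed or possible.
    """
    orphans = [m for m, n in replica_counts.items() if n == 0]
    donors = [m for m, n in replica_counts.items() if n > 1]
    if not orphans or not donors:
        return None
    donor = max(donors, key=replica_counts.get)  # the richest master donates
    orphan = orphans[0]
    replica_counts[donor] -= 1
    replica_counts[orphan] += 1
    return donor, orphan

# Master A has a spare slave; master B just lost its only slave
counts = {"A": 2, "B": 0, "C": 1}
print(migrate_spare(counts))  # ('A', 'B')
print(counts)                 # {'A': 1, 'B': 1, 'C': 1}
```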