Linux environment setup (3) -- day-to-day operations on a Redis cluster

Preface: following the previous post on building a Redis Cluster environment, I felt it was worth recording some day-to-day cluster administration commands, so I worked through them hands-on and documented the process.

The local cluster consists of six nodes: 7001 and 7002 on 192.168.3.104, 7003 and 7004 on 192.168.3.105, and 7005 and 7006 on 192.168.3.106.

1. Check the cluster status: redis-trib.rb check [ip]:[port]
The IP and port after the command can be those of any node already in the cluster.

[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
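The coverage check at the end of the output verifies that the masters' slot ranges together cover all 16384 hash slots. A minimal sketch of that bookkeeping in Python, using the three slot ranges from the output above:

```python
def parse_slots(spec: str) -> set:
    """Expand a slot spec like '0-332,5461-5794' into a set of slot numbers."""
    slots = set()
    for part in spec.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            slots.update(range(int(lo), int(hi) + 1))
        elif part:
            slots.add(int(part))
    return slots

# slot ranges held by the three masters in the check output above
masters = {'7004': '0-5460', '7003': '5461-10922', '7005': '10923-16383'}
covered = set()
for spec in masters.values():
    covered |= parse_slots(spec)
print(len(covered))  # 16384 -> "[OK] All 16384 slots covered."
```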

2. Add a master node: redis-trib.rb add-node 192.168.3.107:7007 192.168.3.105:7003
Notes:
192.168.3.107:7007 is the node being added
192.168.3.105:7003 is any existing node in the cluster

[root@eshop-cache02 init.d]# redis-trib.rb add-node 192.168.3.107:7007 192.168.3.105:7003
>>> Adding node 192.168.3.107:7007 to cluster 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.107:7007 to make it join the cluster.
[OK] New node added correctly.

Checking the cluster again, a 7007 master node has appeared, but at this point 7007 holds 0 slots:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

3. Add a replica node: redis-trib.rb add-node --slave --master-id fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7008 192.168.3.105:7003
Notes:
--slave means the node is added as a replica
--master-id fb348bf965eeb8fbf585923130fcd36237d54c6d is the node ID of the master to replicate; here it is the ID of the 7007 master added above
192.168.3.107:7008 is the node being added
192.168.3.105:7003 is any existing node in the cluster

[root@eshop-cache02 init.d]# redis-trib.rb add-node --slave --master-id fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7008 192.168.3.105:7003
>>> Adding node 192.168.3.107:7008 to cluster 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.107:7008 to make it join the cluster.
Waiting for the cluster to join.....
>>> Configure node as replica of 192.168.3.107:7007.
[OK] New node added correctly.

Checking the cluster again, a 7008 replica node has appeared:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
   slots: (0 slots) slave
   replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots: (0 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4. Reassign slots: redis-trib.rb reshard 192.168.3.107:7007

[root@eshop-cache02 init.d]# redis-trib.rb reshard 192.168.3.107:7007
>>> Performing Cluster Check (using node 192.168.3.107:7007)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots: (0 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
   slots: (0 slots) slave
   replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1000   // number of slots to move
What is the receiving node ID? fb348bf965eeb8fbf585923130fcd36237d54c6d  // node ID of the newly added master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all  // take slots from all existing masters (or enter source node IDs one by one)
... (output omitted) ...
Do you want to proceed with the proposed reshard plan (yes/no)? yes // confirm the reshard
... (output omitted) ...
Moving slot 330 from 192.168.3.105:7004 to 192.168.3.107:7007: 
Moving slot 331 from 192.168.3.105:7004 to 192.168.3.107:7007: 
Moving slot 332 from 192.168.3.105:7004 to 192.168.3.107:7007: 
[root@eshop-cache02 init.d]# 

The reshard is complete.

Checking the cluster again, 7007 now holds 1000 slots, and the remaining slots are split evenly among the other masters:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5795-10922 (5128 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: b5687d801ebdef7c8057ff4a4f257d32d3e022e4 192.168.3.107:7008
   slots: (0 slots) slave
   replicates fb348bf965eeb8fbf585923130fcd36237d54c6d
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:333-5460 (5128 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:11256-16383 (5128 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots:0-332,5461-5794,10923-11255 (1000 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
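With source `all`, redis-trib takes slots proportionally from every master, so 7007 ends up with three disjoint ranges. A quick Python check that those ranges really add up to 1000 slots, and that each donor master kept 5128:

```python
def count_slots(spec: str) -> int:
    """Count the slots in a spec like '0-332,5461-5794,10923-11255'."""
    total = 0
    for part in spec.split(','):
        lo, _, hi = part.partition('-')
        total += int(hi or lo) - int(lo) + 1
    return total

print(count_slots('0-332,5461-5794,10923-11255'))  # 1000, the amount moved to 7007
print(count_slots('333-5460'))                     # 5128 left on 7004
```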

5. Remove a replica node: redis-trib.rb del-node [IP]:[port] 'node ID'
Notes:
The IP and port are those of the replica being removed, and the node ID is that replica's node ID.

[root@eshop-cache02 init.d]# redis-trib.rb del-node 192.168.3.107:7008 'b5687d801ebdef7c8057ff4a4f257d32d3e022e4'
>>> Removing node b5687d801ebdef7c8057ff4a4f257d32d3e022e4 from cluster 192.168.3.107:7008
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Removal complete. Checking the cluster again, the 7008 node is gone:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5795-10922 (5128 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:333-5460 (5128 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:11256-16383 (5128 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots:0-332,5461-5794,10923-11255 (1000 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6. Remove a master node: first empty it with redis-trib.rb reshard [IP]:[port], then delete it with del-node
If the master has replicas, move them to other masters first.
If the master holds slots, reshard them away, and only then delete the master.

[root@eshop-cache02 init.d]# redis-trib.rb reshard 192.168.3.107:7007
>>> Performing Cluster Check (using node 192.168.3.107:7007)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots:0-332,5461-5794,10923-11255 (1000 slots) master
   0 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:11256-16383 (5128 slots) master
   1 additional replica(s)
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:5795-10922 (5128 slots) master
   1 additional replica(s)
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:333-5460 (5128 slots) master
   1 additional replica(s)
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 29fbdff232cba71ae300fd8900e8e391d8455658
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:fb348bf965eeb8fbf585923130fcd36237d54c6d
Source node #2:done
... (output omitted) ...
Moving slot 11255 from fb348bf965eeb8fbf585923130fcd36237d54c6d
Do you want to proceed with the proposed reshard plan (yes/no)? yes
... (output omitted) ...
Moving slot 11253 from 192.168.3.107:7007 to 192.168.3.105:7003: 
Moving slot 11254 from 192.168.3.107:7007 to 192.168.3.105:7003: 
Moving slot 11255 from 192.168.3.107:7007 to 192.168.3.105:7003: 

This strips the slots from 7007 and reassigns them to 7003; in effect, the 1000 slots 7007 used to hold have been handed over to 7003. Checking the cluster again:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:0-332,5461-11255 (6128 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:333-5460 (5128 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:11256-16383 (5128 slots) master
   1 additional replica(s)
M: fb348bf965eeb8fbf585923130fcd36237d54c6d 192.168.3.107:7007
   slots: (0 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
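As a sanity check, 7003's new composite range should equal its old 5128 slots plus the 1000 moved off 7007. The arithmetic, as a sketch:

```python
# 7003's slot ranges after absorbing 7007's 1000 slots: 0-332,5461-11255
ranges = [(0, 332), (5461, 11255)]
total = sum(hi - lo + 1 for lo, hi in ranges)
print(total)  # 6128 = 5128 (7003's original share) + 1000 (moved from 7007)
```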

At this point 7007 has merely been emptied of slots; the node itself still exists, so we delete it next:
redis-trib.rb del-node [ip]:[port] 'node ID'   // note: the IP, port, and node ID are all those of the node being deleted
[root@eshop-cache02 init.d]#  redis-trib.rb del-node 192.168.3.107:7007 'fb348bf965eeb8fbf585923130fcd36237d54c6d'
>>> Removing node fb348bf965eeb8fbf585923130fcd36237d54c6d from cluster 192.168.3.107:7007
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Checking the cluster again, the 7007 master has been removed:
[root@eshop-cache02 init.d]# redis-trib.rb check 192.168.3.105:7003
>>> Performing Cluster Check (using node 192.168.3.105:7003)
M: 29fbdff232cba71ae300fd8900e8e391d8455658 192.168.3.105:7003
   slots:0-332,5461-11255 (6128 slots) master
   1 additional replica(s)
S: 1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002
   slots: (0 slots) slave
   replicates 29fbdff232cba71ae300fd8900e8e391d8455658
S: 95ced0a89c8e264957a5741fceed6fc2ff9160dc 192.168.3.104:7001
   slots: (0 slots) slave
   replicates f020b3cd13880d6b45bde073884d625181da5ada
S: 40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006
   slots: (0 slots) slave
   replicates be88db01183aa149949ee8abbb92633081082a7f
M: f020b3cd13880d6b45bde073884d625181da5ada 192.168.3.105:7004
   slots:333-5460 (5128 slots) master
   1 additional replica(s)
M: be88db01183aa149949ee8abbb92633081082a7f 192.168.3.106:7005
   slots:11256-16383 (5128 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

7. Move a replica to a different master

7.1. List the replicas of 7005: redis-cli -h [master IP] -p [master port] cluster nodes | grep slave | grep [master node ID]
Note: the IP, port, and node ID are all those of the master being inspected.
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.106 -p 7005 cluster nodes | grep slave | grep be88db01183aa149949ee8abbb92633081082a7f
40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave be88db01183aa149949ee8abbb92633081082a7f 0 1558886624370 6 connected
7005 has one replica, 7006.

7.2. Attach 7006 to its new master, 7003
[root@eshop-cache02 init.d]# redis-cli -c -h 192.168.3.106 -p 7006   // log in to the 7006 replica with -c (cluster mode)
192.168.3.106:7006> cluster replicate 29fbdff232cba71ae300fd8900e8e391d8455658  // the argument is the new master's node ID
OK
192.168.3.106:7006> exit // log out

7.3. List the replicas of 7005 and 7003 again: 7005 now has none, and 7003 has two.
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.106 -p 7005 cluster nodes | grep slave | grep be88db01183aa149949ee8abbb92633081082a7f
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.105 -p 7003 cluster nodes | grep slave | grep 29fbdff232cba71ae300fd8900e8e391d8455658
1db5527a96364e6a73a5584e2f2b8c44bb06921d 192.168.3.104:7002 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887034628 10 connected
40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave 29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887033624 10 connected
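Each line of `cluster nodes` output (in this Redis version's format) is space-separated: node ID, address, flags, master ID (or `-` for masters), ping/pong timestamps, config epoch, and link state. A small Python sketch that parses the 7006 line above and confirms its new master:

```python
# the 7006 replica's line from the cluster nodes output above
line = ("40ecf79b7a802e12d4e32c7e11c63e00ddb7adac 192.168.3.106:7006 slave "
        "29fbdff232cba71ae300fd8900e8e391d8455658 0 1558887034628 10 connected")

node_id, addr, flags, master_id, ping, pong, epoch, state = line.split()
print(flags, master_id[:8])  # 'slave 29fbdff2' -> 7006 now replicates 7003
```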

8. Clearing the Redis cache

8.1. Log in to a node
[root@eshop-cache02 init.d]# redis-cli -h 192.168.3.105 -p 7004
192.168.3.105:7004> 
8.2. Run dbsize to count the keys on this node
192.168.3.105:7004> dbsize
(integer) 2
8.3. Run flushall to empty this node (note: in a cluster, flushall clears only the node you are connected to, not the whole cluster)
192.168.3.105:7004> flushall
OK
8.4. List all keys on the node: keys * (the output below was captured before the flush; right after flushall this returns an empty list)
192.168.3.105:7004> keys *
1) "k3"
2) "k2"
8.5. Delete a specific key: del key
192.168.3.105:7004> del k2
(integer) 1

9. Upgrading nodes:

Upgrading a replica node is simple: just stop the node and restart it with the updated Redis version. If clients use replica nodes to serve reads, they should be able to reconnect to another replica while one is unavailable.

Upgrading a master is somewhat more involved; the recommended steps are:
1) Use cluster failover to trigger a manual failover of the master (see the manual failover section of the Redis Cluster documentation).
2) Wait for the master to become a replica.
3) Upgrade the node as you would a replica.
4) If you want the node you just upgraded to be a master again, trigger another manual failover to promote it back.

Repeat these steps node by node until every node has been upgraded.

10. Common commands run after logging in to a cluster node

cluster info: print information about the cluster.
cluster nodes: list all nodes currently known to the cluster, along with their details.
Nodes
cluster meet <ip> <port>: add the node at ip:port to the cluster, making it a member.
cluster forget <node_id>: remove the node identified by node_id from the cluster.
cluster replicate <master_node_id>: make the current node a replica of the master identified by master_node_id. Can only be run on a replica.
cluster saveconfig: save the node's cluster configuration file to disk.
Slots
cluster addslots <slot> [slot ...]: assign one or more slots to the current node.
cluster delslots <slot> [slot ...]: remove the assignment of one or more slots from the current node.
cluster flushslots: remove all slot assignments from the current node, leaving it with no slots.
cluster setslot <slot> node <node_id>: assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first, then the new assignment takes effect.
cluster setslot <slot> migrating <node_id>: mark the slot on this node as migrating to the node identified by node_id.
cluster setslot <slot> importing <node_id>: mark the slot on this node as importing from the node identified by node_id.
cluster setslot <slot> stable: clear the importing or migrating state of the slot.
Keys
cluster keyslot <key>: compute which slot the key hashes to.
cluster countkeysinslot <slot>: return the number of keys currently in the slot.
cluster getkeysinslot <slot> <count>: return up to count keys from the slot.
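cluster keyslot maps a key to CRC16 (XModem variant) of the key modulo 16384, honoring hash tags: if the key contains a non-empty {...} section, only that part is hashed, which is how related keys can be forced onto the same slot. A self-contained Python sketch of the mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021), the checksum Redis Cluster uses for keys."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to its hash slot, honoring {hash tags} as Redis Cluster does."""
    start = key.find(b'{')
    if start != -1:
        end = key.find(b'}', start + 1)
        if end > start + 1:            # non-empty tag: hash only the tag content
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

# keys sharing a hash tag land on the same slot
print(key_slot(b'{user1000}.following') == key_slot(b'{user1000}.followers'))  # True
```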

11. Using automatic replica migration for a more highly available deployment

Redis Cluster migrates replicas automatically: if a master's replica dies, the cluster will move a spare replica over to that master (any master with more than one replica has spares, and the extras are eligible for migration).

If every master has exactly one replica, then once that replica dies and, shortly afterwards, the master dies too, availability drops. By attaching a few spare replicas to the cluster, a master whose replica dies gets a spare migrated to it automatically as its new replica; even if that master then dies as well, there is still a replica available to be promoted to master.
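The behavior described above can be illustrated with a toy model (not the actual Redis algorithm, which also considers node IDs and the cluster-migration-barrier setting): when a master is left with no replicas, a spare moves over from the master that has the most.

```python
def migrate_spare_replica(replica_count: dict) -> dict:
    """Toy model of Redis Cluster replica migration: an orphaned master
    (0 replicas) receives a spare from the master with the most replicas,
    provided the donor keeps at least one."""
    for orphan, count in replica_count.items():
        if count == 0:
            donor = max(replica_count, key=replica_count.get)
            if replica_count[donor] > 1:
                replica_count[donor] -= 1
                replica_count[orphan] += 1
    return replica_count

# master A carries a spare replica; B's only replica just died
print(migrate_spare_replica({'A': 2, 'B': 0, 'C': 1}))  # {'A': 1, 'B': 1, 'C': 1}
```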

Reference: https://www.cnblogs.com/kevingrace/p/7910692.html
