I. Basic Introduction
A Redis cluster needs at least 3 master nodes, and for high availability each master should have at least one slave (otherwise, if a master goes down, the data it holds becomes unreachable). A minimal highly available Redis cluster therefore has 6 nodes: 3 masters and 3 slaves.
How a Redis cluster shares data
Redis Cluster uses data sharding rather than consistent hashing: a cluster contains 16384 hash slots, every key in the database belongs to exactly one of these slots, and the cluster computes a key's slot with the formula CRC16(key) % 16384, where CRC16(key) is the CRC16 checksum of the key.
Each node in the cluster is responsible for a subset of the hash slots. For example, in a cluster of three nodes:
• Node A handles slots 0 through 5500.
• Node B handles slots 5501 through 11000.
• Node C handles slots 11001 through 16383.
Distributing the hash slots across nodes this way makes it easy to add or remove nodes. For example (see the sketch below):
• To add a new node D to the cluster, the cluster only needs to move some slots from nodes A, B, and C over to node D.
• Likewise, to remove node A, the cluster moves all of node A's slots to nodes B and C, and then removes the now-empty node A.
Because moving a hash slot from one node to another never blocks either node, adding nodes, removing nodes, and changing how many slots a node holds can all be done without taking the cluster offline.
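With the redis-trib tool installed later in this guide, these operations map onto concrete commands. As a rough sketch (192.168.31.200:7117 is a hypothetical new node, not part of this guide's setup), a new empty master is introduced through any existing node:
# redis-trib add-node 192.168.31.200:7117 192.168.31.143:7111
and slots are then moved onto it interactively; redis-trib prompts for the number of slots to move, the target node ID, and the source nodes:
# redis-trib reshard 192.168.31.143:7111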
This is also what happens on every write: when you SET a value, the cluster computes the key's slot with the formula above and stores the data on the node that owns that slot; reads are routed the same way.
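You can see the mapping directly: once the cluster built below is running, any node will report a key's slot via the CLUSTER KEYSLOT command (the key and slot shown here match the test at the end of this guide):
# /usr/local/redis3/bin/redis-cli -p 7111 cluster keyslot key1
(integer) 9189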
II. Installation
1. Server planning
Note: check beforehand that the planned ports are not already in use. It also helps if the port numbers follow a consistent pattern across servers, which makes installation and maintenance easier.
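For reference, the layout used throughout this guide (six servers, with the client port, the cluster bus port opened in the firewall below, and the intended role) is:
<pre>
IP              port   bus port  planned role
192.168.31.143  7111   17111     master
192.168.31.103  7112   17112     master
192.168.31.154  7113   17113     master
192.168.31.117  7114   17114     slave of 7111
192.168.31.146  7115   17115     slave of 7112
192.168.31.173  7116   17116     slave of 7113
</pre>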
2. Open the required ports on each server
Each node needs two ports reachable from the other nodes: the client port (e.g. 7111) and the cluster bus port, which is always the client port plus 10000 (e.g. 17111).
Switch to the root user and edit the firewall rules:
# vi /etc/sysconfig/iptables
Add to 192.168.31.143:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17111 -j ACCEPT
Add to 192.168.31.103:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7112 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17112 -j ACCEPT
Add to 192.168.31.154:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7113 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17113 -j ACCEPT
Add to 192.168.31.117:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7114 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17114 -j ACCEPT
Add to 192.168.31.146:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7115 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17115 -j ACCEPT
Add to 192.168.31.173:
## redis
-A INPUT -m state --state NEW -m tcp -p tcp --dport 7116 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 17116 -j ACCEPT
Once the ports have been added on every server, restart iptables to apply the rules:
# service iptables restart
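You can confirm the rules are active by listing the INPUT chain and filtering for the ports opened above:
# iptables -L INPUT -n | grep 711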
Now install Redis itself. Installation directory: /usr/local/redis3; installation user: root.
3. Install the packages needed to compile Redis
# yum install gcc tcl
4. Download (or upload) the latest stable Redis 3 release to /usr/local/src
# cd /usr/local/src
# wget http://download.redis.io/releases/redis-3.0.3.tar.gz
5. Create the installation directory:
# mkdir /usr/local/redis3
6. Extract the archive and enter the source directory
# tar -zxvf redis-3.0.3.tar.gz
# cd redis-3.0.3
7. Compile and install (PREFIX sets the installation directory)
# make PREFIX=/usr/local/redis3 install
After installation, /usr/local/redis3 contains a bin directory holding the Redis executables:
redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server
All of the steps above must be performed on every machine.
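To confirm the build on each machine, you can print the installed version, which should report v=3.0.3:
# /usr/local/redis3/bin/redis-server --version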
8. Create a cluster configuration directory on each node and copy redis.conf into it
192.168.31.143
# mkdir -p /usr/local/redis3/cluster/7111
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7111/redis-7111.conf
192.168.31.103
# mkdir -p /usr/local/redis3/cluster/7112
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7112/redis-7112.conf
192.168.31.154
# mkdir -p /usr/local/redis3/cluster/7113
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7113/redis-7113.conf
192.168.31.117
# mkdir -p /usr/local/redis3/cluster/7114
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7114/redis-7114.conf
192.168.31.146
# mkdir -p /usr/local/redis3/cluster/7115
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7115/redis-7115.conf
192.168.31.173
# mkdir -p /usr/local/redis3/cluster/7116
# cp /usr/local/src/redis-3.0.3/redis.conf /usr/local/redis3/cluster/7116/redis-7116.conf
9. Edit the redis.conf of each of the 6 nodes, then start each node with its own configuration file
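The key cluster-related settings, following the standard Redis Cluster tutorial, are shown below for the 7111 node as a minimal sketch (adjust port, pidfile, and cluster-config-file per node; the rest of the shipped redis.conf can stay at its defaults):
<pre>
port 7111
daemonize yes
pidfile /var/run/redis-7111.pid
cluster-enabled yes
cluster-config-file nodes-7111.conf
cluster-node-timeout 15000
appendonly yes
</pre>
After editing, start each node: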
192.168.31.143
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7111/redis-7111.conf
192.168.31.103
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7112/redis-7112.conf
192.168.31.154
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7113/redis-7113.conf
192.168.31.117
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7114/redis-7114.conf
192.168.31.146
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7115/redis-7115.conf
192.168.31.173
# /usr/local/redis3/bin/redis-server /usr/local/redis3/cluster/7116/redis-7116.conf
After starting them one by one, check that each instance is running with ps -ef | grep redis:
root 3273 1 3 15:18 ? 00:00:00 /usr/local/redis3/bin/redis-server *:7111 [cluster]
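Each instance should also answer a PING on its client port:
# /usr/local/redis3/bin/redis-cli -p 7111 ping
PONG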
Next we create the cluster.
10. Install ruby and rubygems (note: ruby 1.8.7 or later is required)
# yum install ruby rubygems
Check the ruby version:
# ruby -v
ruby 1.8.7 (2013-06-27 patchlevel 374) [x86_64-linux]
11. Install the redis gem (the Ruby client library that the cluster creation script depends on)
# gem install redis
Output like the following should appear:
Successfully installed redis-3.3.1
1 gem installed
Installing ri documentation for redis-3.3.1...
Installing RDoc documentation for redis-3.3.1...
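You can double-check that the gem is visible to Ruby:
# gem list redis
redis (3.3.1)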
12. Run the cluster creation command (it only needs to be run on one of the nodes)
Copy the cluster creation script onto the PATH:
# cp /usr/local/src/redis-3.0.3/src/redis-trib.rb /usr/local/bin/redis-trib
Run the cluster creation script:
# redis-trib create --replicas 1 192.168.31.117:7114 192.168.31.146:7115 192.168.31.173:7116 192.168.31.143:7111 192.168.31.103:7112 192.168.31.154:7113
Notes on the creation command:
The create subcommand tells redis-trib to build a new cluster; --replicas 1 means each master gets exactly one slave; the remaining arguments are the addresses of the Redis instances the cluster will be built from.
On the first run, the output was:
<pre>
>>> Creating cluster
Connecting to node 192.168.31.117:7114: OK
Connecting to node 192.168.31.146:7115: OK
Connecting to node 192.168.31.173:7116: OK
Connecting to node 192.168.31.143:7111: OK
Connecting to node 192.168.31.103:7112: OK
Connecting to node 192.168.31.154:7113: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.31.154:7113
192.168.31.173:7116
192.168.31.143:7111
Adding replica 192.168.31.146:7115 to 192.168.31.154:7113
Adding replica 192.168.31.117:7114 to 192.168.31.173:7116
Adding replica 192.168.31.103:7112 to 192.168.31.143:7111
S: 26a53ca5649a85d75d7f78c17897846fad8548c3 192.168.31.117:7114
replicates 3dc26de1ddb9304dc3451e7044c91b3fd5cb77b9
S: 51dfb182af4fdc33c69218fe6f8421c0311f67f0 192.168.31.146:7115
replicates e9b5cd667523705b7f4052dd847a45c9abd4ff2e
M: 3dc26de1ddb9304dc3451e7044c91b3fd5cb77b9 192.168.31.173:7116
slots:5461-10922 (5462 slots) master
M: 45d00776095446fcf7e194a53e50311da4e2c87e 192.168.31.143:7111
slots:10923-16383 (5461 slots) master
S: aa1577c0295783245870e114ffdcabd0ee9bfd07 192.168.31.103:7112
replicates 45d00776095446fcf7e194a53e50311da4e2c87e
M: e9b5cd667523705b7f4052dd847a45c9abd4ff2e 192.168.31.154:7113
slots:0-5460 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept):
</pre>
The output shows 7111, 7113, and 7116 chosen as masters, which does not match the plan (the intended masters are 7111, 7112, and 7113). So at the confirmation prompt we type no and run the script again; the assignment appears to be random, so check carefully at this step that the master/slave relationships are the ones you planned.
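If rerunning until the layout matches feels too haphazard, an alternative approach (not the one used in this guide) is to create a masters-only cluster first and then attach each slave to a specific master with redis-trib's add-node command, where <master-node-id> is a placeholder for the target master's ID as printed in the output above:
# redis-trib create 192.168.31.143:7111 192.168.31.103:7112 192.168.31.154:7113
# redis-trib add-node --slave --master-id <master-node-id> 192.168.31.117:7114 192.168.31.143:7111
Here we simply reran the script, and after a few tries the output matched the plan: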
<pre>
>>> Creating cluster
Connecting to node 192.168.31.117:7114: OK
Connecting to node 192.168.31.146:7115: OK
Connecting to node 192.168.31.173:7116: OK
Connecting to node 192.168.31.143:7111: OK
Connecting to node 192.168.31.103:7112: OK
Connecting to node 192.168.31.154:7113: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.31.103:7112
192.168.31.154:7113
192.168.31.143:7111
Adding replica 192.168.31.146:7115 to 192.168.31.103:7112
Adding replica 192.168.31.173:7116 to 192.168.31.154:7113
Adding replica 192.168.31.117:7114 to 192.168.31.143:7111
S: 26a53ca5649a85d75d7f78c17897846fad8548c3 192.168.31.117:7114
replicates 45d00776095446fcf7e194a53e50311da4e2c87e
S: 51dfb182af4fdc33c69218fe6f8421c0311f67f0 192.168.31.146:7115
replicates aa1577c0295783245870e114ffdcabd0ee9bfd07
S: 3dc26de1ddb9304dc3451e7044c91b3fd5cb77b9 192.168.31.173:7116
replicates e9b5cd667523705b7f4052dd847a45c9abd4ff2e
M: 45d00776095446fcf7e194a53e50311da4e2c87e 192.168.31.143:7111
slots:10923-16383 (5461 slots) master
M: aa1577c0295783245870e114ffdcabd0ee9bfd07 192.168.31.103:7112
slots:0-5460 (5461 slots) master
M: e9b5cd667523705b7f4052dd847a45c9abd4ff2e 192.168.31.154:7113
slots:5461-10922 (5462 slots) master
Can I set the above configuration? (type 'yes' to accept):
</pre>
Type yes and redis-trib applies the configuration to every node and joins them together so they start communicating; the output is:
<pre>
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.31.117:7114)
M: 26a53ca5649a85d75d7f78c17897846fad8548c3 192.168.31.117:7114
slots: (0 slots) master
replicates 45d00776095446fcf7e194a53e50311da4e2c87e
M: 51dfb182af4fdc33c69218fe6f8421c0311f67f0 192.168.31.146:7115
slots: (0 slots) master
replicates aa1577c0295783245870e114ffdcabd0ee9bfd07
M: 3dc26de1ddb9304dc3451e7044c91b3fd5cb77b9 192.168.31.173:7116
slots: (0 slots) master
replicates e9b5cd667523705b7f4052dd847a45c9abd4ff2e
M: 45d00776095446fcf7e194a53e50311da4e2c87e 192.168.31.143:7111
slots:10923-16383 (5461 slots) master
M: aa1577c0295783245870e114ffdcabd0ee9bfd07 192.168.31.103:7112
slots:0-5460 (5461 slots) master
M: e9b5cd667523705b7f4052dd847a45c9abd4ff2e 192.168.31.154:7113
slots:5461-10922 (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
</pre>
From this output we can read off the topology:
slave 192.168.31.117:7114 belongs to master 192.168.31.143:7111
slave 192.168.31.146:7115 belongs to master 192.168.31.103:7112
slave 192.168.31.173:7116 belongs to master 192.168.31.154:7113 (match the ID after "replicates" against the master's node ID; it behaves like a foreign key)
192.168.31.103:7112 is assigned slots 0-5460
192.168.31.154:7113 is assigned slots 5461-10922
192.168.31.143:7111 is assigned slots 10923-16383
(Each node in the cluster handles a subset of the hash slots.)
The last line means that all 16384 slots are served by at least one master, i.e. the cluster is running normally.
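The same health summary is available from any node via CLUSTER INFO; the fields to look for are cluster_state and cluster_slots_assigned (output abridged):
# /usr/local/redis3/bin/redis-cli -p 7111 cluster info
<pre>
cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3
</pre>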
13. Inspect the cluster's master/slave relationships
# /usr/local/redis3/bin/redis-cli -p 7111 cluster nodes
Output:
<pre>
26a53ca5649a85d75d7f78c17897846fad8548c3 192.168.31.117:7114 slave 45d00776095446fcf7e194a53e50311da4e2c87e 0 1471133132198 4 connected
3dc26de1ddb9304dc3451e7044c91b3fd5cb77b9 192.168.31.173:7116 slave e9b5cd667523705b7f4052dd847a45c9abd4ff2e 0 1471133130178 6 connected
aa1577c0295783245870e114ffdcabd0ee9bfd07 192.168.31.103:7112 master - 0 1471133133207 5 connected 0-5460
51dfb182af4fdc33c69218fe6f8421c0311f67f0 192.168.31.146:7115 slave aa1577c0295783245870e114ffdcabd0ee9bfd07 0 1471133134217 5 connected
45d00776095446fcf7e194a53e50311da4e2c87e 192.168.31.143:7111 myself,master - 0 0 4 connected 10923-16383
e9b5cd667523705b7f4052dd847a45c9abd4ff2e 192.168.31.154:7113 master - 0 1471133131189 6 connected 5461-10922
</pre>
14. A quick test of the cluster
Pick any node and open the Redis CLI in cluster mode (the -c flag tells the client to follow redirects):
# /usr/local/redis3/bin/redis-cli -c -p 7114
<pre>127.0.0.1:7114> set key1 dreyer
-> Redirected to slot [9189] located at 192.168.31.154:7113
OK
192.168.31.154:7113> get key1
"dreyer"
192.168.31.154:7113>
</pre>
The SET is redirected to the 192.168.31.154:7113 node, because the cluster computes CRC16(key) % 16384 to determine which slot, and therefore which node, the key belongs to.
Now switch to 192.168.31.154:7113 and read the key we just set; since the key lives on this node, the value comes back directly:
# /usr/local/redis3/bin/redis-cli -c -p 7113
<pre>127.0.0.1:7113> get key1
"dreyer"
</pre>
Then switch to 192.168.31.143:7111 and read the same key:
# /usr/local/redis3/bin/redis-cli -c -p 7111
<pre>
127.0.0.1:7111> get key1
-> Redirected to slot [9189] located at 192.168.31.154:7113
"dreyer"
</pre>
When reading from node 7111, the client is redirected to 192.168.31.154:7113, where the data actually lives.
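Note that following the redirect depends on the -c flag; without it, redis-cli simply reports the MOVED error and leaves the redirect to the caller:
# /usr/local/redis3/bin/redis-cli -p 7111
<pre>127.0.0.1:7111> get key1
(error) MOVED 9189 192.168.31.154:7113
</pre>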
And with that, the cluster is up and running!