Building a Cluster with redis-trib.rb
redis-trib.rb is a Ruby-based Redis Cluster management tool.
It simplifies common operations such as cluster creation, checking, slot migration, and rebalancing through cluster-related commands.
A Ruby runtime environment must be installed before use.
1. Ruby environment preparation
Download site: https://cache.ruby-lang.org/pub/ruby/2.3/

-- download ruby ---
cd /soft/tools
wget https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.4.tar.gz

-- install ruby ---
tar zxf ruby-2.3.4.tar.gz
cd ruby-2.3.4
./configure --prefix=/soft/ruby-2.3.4
make
make install
ln -s /soft/ruby-2.3.4 /soft/ruby
cd /soft/ruby
cp bin/ruby /usr/local/bin/
cp bin/gem /usr/local/bin/
[root@lbl ruby]# ll /usr/local/bin/{ruby,gem}
-rwxr-xr-x. 1 root root      548 Apr 21 00:42 /usr/local/bin/gem
-rwxr-xr-x. 1 root root 22306743 Apr 21 00:42 /usr/local/bin/ruby

-- install the rubygem redis dependency ---
wget http://rubygems.org/downloads/redis-3.3.0.gem
gem install -l redis-3.3.0.gem
gem list --check redis gem

-- install the redis-trib.rb management tool ---
[root@test ruby]# cp /soft/tools/redis-3.2.0/src/redis-trib.rb /usr/local/bin/
[root@test ruby]# ll /usr/local/bin/{ruby,gem,*.rb}
-rwxr-xr-x. 1 root root      548 Apr 16 10:05 /usr/local/bin/gem
-rwxr-xr-x. 1 root root    60578 Apr 16 10:07 /usr/local/bin/redis-trib.rb
-rwxr-xr-x. 1 root root 22306727 Apr 16 10:05 /usr/local/bin/ruby
2. Prepare Nodes
-- Master nodes ---
redis-server /soft/redis/cluster/7000/redis.conf &
redis-server /soft/redis/cluster/7001/redis.conf &
redis-server /soft/redis/cluster/7100/redis.conf &

-- Slave nodes ---
redis-server /soft/redis/cluster/7101/redis.conf &
redis-server /soft/redis/cluster/7200/redis.conf &
redis-server /soft/redis/cluster/7300/redis.conf &
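Each node directory above is assumed to contain a cluster-enabled redis.conf. A minimal sketch for the 7000 node follows; the directives are standard Redis Cluster settings, but the paths, log file name, and timeout value here are illustrative, not taken from the original setup:

```conf
# Minimal cluster-enabled configuration for one node (illustrative values)
port 7000
daemonize yes
dir /soft/redis/cluster/7000
logfile "7000.log"
# run this instance in cluster mode
cluster-enabled yes
# per-node state file, generated and maintained by Redis itself
cluster-config-file nodes-7000.conf
# milliseconds a node may be unreachable before it is considered failing
cluster-node-timeout 15000
```

The other five nodes use the same template with their own port and directory.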
3. Creating Clusters
After all six nodes are started, complete the handshake and slot-allocation process with the redis-trib.rb create command:
Note: when specifying nodes with --replicas 1, the first three addresses become master nodes and the last three become their corresponding slave nodes.
[root@test cluster]# redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7100 127.0.0.1:7101 127.0.0.1:7200 127.0.0.1:7300
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7100
Adding replica 127.0.0.1:7101 to 127.0.0.1:7000
Adding replica 127.0.0.1:7200 to 127.0.0.1:7001
Adding replica 127.0.0.1:7300 to 127.0.0.1:7100
M: b70ce6df43039cd8ef2004a031851668dfe51982 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 3300b8f899d7f369d7095025954f2069857801c0 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 085d2851ef195428786f7df14a2c00fedb6ccec9 127.0.0.1:7100
   slots:10923-16383 (5461 slots) master
S: 97cf943c9fac35520fdd9426e344f7b7cc390fb8 127.0.0.1:7101
   replicates b70ce6df43039cd8ef2004a031851668dfe51982
S: e2fae64bbac1fc28d66c4cb21c5be95be4ba8953 127.0.0.1:7200
   replicates 3300b8f899d7f369d7095025954f2069857801c0
S: 5207520b05fd05240a56d132bf90fa4e9dde97cb 127.0.0.1:7300
   replicates 085d2851ef195428786f7df14a2c00fedb6ccec9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b70ce6df43039cd8ef2004a031851668dfe51982 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 3300b8f899d7f369d7095025954f2069857801c0 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 085d2851ef195428786f7df14a2c00fedb6ccec9 127.0.0.1:7100
   slots:10923-16383 (5461 slots) master
M: 97cf943c9fac35520fdd9426e344f7b7cc390fb8 127.0.0.1:7101
   slots: (0 slots) master
   replicates b70ce6df43039cd8ef2004a031851668dfe51982
M: e2fae64bbac1fc28d66c4cb21c5be95be4ba8953 127.0.0.1:7200
   slots: (0 slots) master
   replicates 3300b8f899d7f369d7095025954f2069857801c0
M: 5207520b05fd05240a56d132bf90fa4e9dde97cb 127.0.0.1:7300
   slots: (0 slots) master
   replicates 085d2851ef195428786f7df14a2c00fedb6ccec9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
4. Cluster Integrity Check
Cluster integrity means that all 16384 slots are allocated to surviving master nodes. If even one of the 16384 slots is not allocated to a node, the cluster is incomplete.
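Why exactly 16384 slots matter: every key maps to one slot via CRC16(key) mod 16384, so an unallocated slot means some keys have no owner. As a standalone illustration (not part of the original setup), the slot computation can be sketched as:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant): polynomial 0x1021, initial value 0,
    the checksum Redis Cluster uses for key-to-slot hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Redis Cluster rule: HASH_SLOT = CRC16(key) mod 16384
    return crc16_xmodem(key.encode()) % 16384

# Reference check value for this CRC variant, as given in the cluster spec
print(hex(crc16_xmodem(b"123456789")))  # 0x31c3
print(key_slot("foo"))
```

A server-side `CLUSTER KEYSLOT <key>` call returns the same value, which is how you can confirm which master owns a given key.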
The redis-trib.rb check command can verify that the cluster created above is complete. It only needs the address of any one node in the cluster to check the whole cluster.
-- The commands are as follows ---
redis-trib.rb check 127.0.0.1:7000
redis-trib.rb check 127.0.0.1:7100

-- The following output indicates that all slots in the cluster have been allocated to nodes ---
[root@test cluster]# redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b70ce6df43039cd8ef2004a031851668dfe51982 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: e2fae64bbac1fc28d66c4cb21c5be95be4ba8953 127.0.0.1:7200
   slots: (0 slots) slave
   replicates 3300b8f899d7f369d7095025954f2069857801c0
S: 97cf943c9fac35520fdd9426e344f7b7cc390fb8 127.0.0.1:7101
   slots: (0 slots) slave
   replicates b70ce6df43039cd8ef2004a031851668dfe51982
M: 085d2851ef195428786f7df14a2c00fedb6ccec9 127.0.0.1:7100
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 5207520b05fd05240a56d132bf90fa4e9dde97cb 127.0.0.1:7300
   slots: (0 slots) slave
   replicates 085d2851ef195428786f7df14a2c00fedb6ccec9
M: 3300b8f899d7f369d7095025954f2069857801c0 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5. Summary
1. The main steps of building a cluster are:
   (1) Prepare nodes
   (2) Node handshake (the meet command, an asynchronous command)
   (3) Allocate slots (cluster addslots {number_start..number_end})
2. A Redis Cluster requires at least 6 nodes: 3 master nodes + 3 slave nodes.
3. Master and slave nodes must explicitly establish a replication relationship (the cluster replicate <master-node-id> command).
4. Node handshaking uses the Gossip protocol: nodes establish the relationship through the meet command and maintain normal communication through ping/pong messages.
5. A node added to the cluster cannot serve any read or write operations until slots are allocated to it.
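The even split produced in the transcript above (0-5460, 5461-10922, 10923-16383) comes from dividing 16384 slots into contiguous ranges, one per master, with a rounded running cursor spreading the one-slot remainder. The sketch below reproduces that split; assuming redis-trib.rb uses exactly this rounding is an inference from the output, not taken from its source:

```python
def split_slots(num_masters: int, total_slots: int = 16384):
    """Divide the slot space into one contiguous range per master.
    A floating-point cursor is advanced by total_slots / num_masters and
    rounded at each boundary, so the remainder lands mid-sequence."""
    per_node = total_slots / num_masters
    ranges = []
    first = 0
    cursor = 0.0
    for _ in range(num_masters):
        cursor += per_node
        last = round(cursor) - 1          # rounded upper boundary of this range
        ranges.append((first, last))
        first = last + 1
    return ranges

print(split_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note how the middle master receives 5462 slots, matching the create output, and the ranges always cover all 16384 slots with no gaps.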