Introduction to Redis architecture design

Posted by playa4real on Tue, 15 Feb 2022 05:21:30 +0100

Redis master-slave replication
brief introduction
A single Redis instance can serve only a limited volume of reads and writes. To raise concurrency, we can run multiple Redis instances; how those instances cooperate requires some architecture design. Here we first analyze and implement the master/slave architecture.

Basic architecture
redis master-slave architecture is shown in the figure:

 

Among them, the master is responsible for reading and writing and for synchronizing data to the slaves; the slave nodes are responsible only for reading.

Quick start practice
Based on Redis, we design a master-slave architecture with one master and two slaves. The master handles reads and writes and synchronizes data to the slaves; the slaves only serve reads. The steps are as follows:

Step 1: delete all existing redis containers, for example:

docker rm -f <redis-container-name>

Step 2: enter your host's docker directory, then make two copies of redis01, for example:

cp -r redis01/ redis02
cp -r redis01/ redis03

Step 3: start three new redis containers, for example:

docker run -p 6379:6379 --name redis6379 \
-v /usr/local/docker/redis01/data:/data \
-v /usr/local/docker/redis01/conf/redis.conf:/etc/redis/redis.conf \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes

docker run -p 6380:6379 --name redis6380 \
-v /usr/local/docker/redis02/data:/data \
-v /usr/local/docker/redis02/conf/redis.conf:/etc/redis/redis.conf \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes

docker run -p 6381:6379 --name redis6381 \
-v /usr/local/docker/redis03/data:/data \
-v /usr/local/docker/redis03/conf/redis.conf:/etc/redis/redis.conf \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes
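After starting them, it can be worth confirming that all three containers are actually up; a quick check (container names as created above):

```shell
# List the three redis containers with their port mappings and status
docker ps --filter "name=redis63" --format "{{.Names}}\t{{.Ports}}\t{{.Status}}"
```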

Step 4: check each redis service's role

Start three clients, log in to the three redis container services respectively, and view the role with the info command. By default, all three newly started redis services have the role master.

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:3860
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:3859




Step 5: check the IP address assigned to redis6379

docker inspect redis6379

......
"Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "c33071765cb48acb1efed6611615c767b04b98e6e298caa0dc845420e6112b73",
                    "EndpointID": "4c77e3f458ea64b7fc45062c5b2b3481fa32005153b7afc211117d0f7603e154",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
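The original numbering jumps from Step 5 to Step 7; the missing step turns redis6380 and redis6381 into slaves of the master. A minimal sketch, assuming the master's IP is 172.17.0.2 as shown by docker inspect (newer redis versions also accept replicaof as a synonym for slaveof):

```shell
# Step 6: make redis6380 and redis6381 replicas of the master at 172.17.0.2
docker exec -it redis6380 redis-cli slaveof 172.17.0.2 6379
docker exec -it redis6381 redis-cli slaveof 172.17.0.2 6379
```

Note that slaveof issued at runtime does not survive a container restart; to make it permanent, put the equivalent line in each slave's redis.conf.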
Step 7: log in to redis6379 again, then check info

[root@centos7964 ~]# docker exec -it redis6379 redis-cli
127.0.0.1:6379> info replication

# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.3,port=6379,state=online,offset=2004,lag=1
slave1:ip=172.17.0.4,port=6379,state=online,offset=2004,lag=1
master_failover_state:no-failover
master_replid:5baf174fd40e97663998abf5d8e89a51f7458488
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2004
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2004

Step 8: log in to redis6379 and test; the master supports both reads and writes

[root@centos7964 ~]# docker exec -it redis6379 redis-cli
127.0.0.1:6379> set role master6379
OK
127.0.0.1:6379> get role
"master6379"
127.0.0.1:6379>



Step 9: log in to redis6380 and test; the slave is read-only.

[root@centos7964 ~]# docker exec -it redis6380 redis-cli
127.0.0.1:6379> get role
"master6379"
127.0.0.1:6379> set role slave6380
(error) READONLY You can't write against a read only replica.
127.0.0.1:6379>

A read/write test in Java (Spring Boot) looks like this:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;

@SpringBootTest
public class MasterSlaveTests {
    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    void testMasterReadWrite() { // run with the profile port set to 6379 (the master): write, then read back
        ValueOperations valueOperations = redisTemplate.opsForValue();
        valueOperations.set("role", "master6379");
        Object role = valueOperations.get("role");
        System.out.println(role);
    }

    @Test
    void testSlaveRead() { // run with the profile port set to 6380 (a slave): read only
        ValueOperations valueOperations = redisTemplate.opsForValue();
        Object role = valueOperations.get("role");
        System.out.println(role);
    }

}
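The comments above mention switching the profile port between tests. A hedged sketch of the corresponding Spring Boot configuration (property names per Spring Data Redis auto-configuration; the host value is a placeholder for your docker host):

```yaml
# application.yml -- point the tests at the master (6379) or a slave (6380/6381)
spring:
  redis:
    host: localhost   # or your docker host's IP
    port: 6379        # change to 6380 to run testSlaveRead against a slave
```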


Principle analysis of master-slave synchronization

Redis master-slave replication can be divided, according to whether all data is copied, into full synchronization (during slave initialization) and incremental synchronization (the ongoing master-slave sync).

Redis full synchronization:
Redis full replication usually occurs during the slave initialization stage, when the slave needs to copy all the data on the master. The specific steps are as follows:
1) The slave server connects to the master server and sends the SYNC command;
2) After receiving the SYNC command, the master server starts executing the BGSAVE command to generate an RDB file, and uses a buffer to record all write commands executed from then on (RDB snapshots the current dataset to a file, while AOF logs each write command to a journal file);
3) After BGSAVE finishes, the master server sends the snapshot file to all slave servers, and continues recording executed write commands while sending;
4) After receiving the snapshot file, the slave server discards all of its old data and loads the received snapshot;
5) Once the snapshot has been sent, the master server starts sending the buffered write commands to the slave server;
6) After loading the snapshot, the slave server starts accepting command requests and executes the write commands from the master server's buffer.
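The result of this process can be observed from the containers built earlier; a sketch (the key name sync:test is just an illustration):

```shell
# Write on the master, then read the same key on a slave
docker exec redis6379 redis-cli set sync:test hello
docker exec redis6380 redis-cli get sync:test

# Replication progress: the master and slave offsets should converge
docker exec redis6379 redis-cli info replication | grep master_repl_offset
docker exec redis6380 redis-cli info replication | grep slave_repl_offset
```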

Redis incremental synchronization
Redis incremental replication refers to the process, once the slave has been initialized and is working normally, of synchronizing the master server's write operations to the slave server: the slave receives each write command from the master and executes it.

Note: the master-slave structure built here is configured in memory only; after redis restarts, it must be rebuilt.

Interview analysis
What would you do if redis had to support 100,000+ concurrent requests?
A single redis instance can rarely sustain a QPS of 100,000+, barring special circumstances: exceptionally good machine performance, a particularly high configuration, a well-maintained physical machine, and operations that are not too complex. A single machine generally handles tens of thousands. To truly achieve high concurrency with redis, read-write separation is required. A cache is generally used to support read-heavy concurrency: write requests are relatively few, perhaps thousands per second, while reads are far more numerous, for example 200,000 per second. Therefore redis high concurrency can be achieved with a master-slave architecture plus a read-write separation mechanism.

What is the replication mechanism of Redis?
(1) Redis replicates data to the slave nodes asynchronously.
(2) One master node can be configured with multiple slave nodes.
(3) Replication does not block the master node's normal work.
(4) Replication does not block the slave node's own query operations either; it serves requests from the old dataset. However, when replication completes, the old dataset must be deleted and the new one loaded, and during that moment external service is briefly suspended.
(5) Slave nodes are mainly used for horizontal scaling and read-write separation; additional slave nodes can raise read throughput.
 

Redis sentinel mode

brief introduction
Sentinel is a mechanism for achieving high availability under Redis's master-slave architecture.
A Sentinel system composed of one or more Sentinel instances can monitor any number of master servers, together with all the slave servers under them. When a monitored master enters the offline state, the system automatically promotes one of that master's slaves to be the new master, which then handles command requests in place of the offline one.

Basic architecture

 

Sentinel quick start

Step 1: open three redis client windows; in each of the three redis containers, go to the /etc/redis directory and execute the following:

cat <<EOF > /etc/redis/sentinel.conf 
sentinel monitor redis6379 172.17.0.2 6379 1
EOF

Here, the line names the master to be monitored: redis6379 is the service name, 172.17.0.2 and 6379 are the master's IP and port, and 1 is how many sentinels must consider the master failed before it is treated as really failed.

Step 2: in each redis container, execute the following command under the /etc/redis directory to start the sentinel service:

redis-sentinel sentinel.conf

Step 3: open a new client window and stop the redis6379 service (currently the master):

docker stop redis6379

In the other client windows, check the log output, for example:

410:X 11 Jul 2021 09:54:27.383 # +switch-master redis6379 172.17.0.2 6379 172.17.0.4 6379
410:X 11 Jul 2021 09:54:27.383 * +slave slave 172.17.0.3:6379 172.17.0.3 6379 @ redis6379 172.17.0.4 6379
410:X 11 Jul 2021 09:54:27.383 * +slave slave 172.17.0.2:6379 172.17.0.2 6379 @ redis6379 172.17.0.4 6379

Step 4: log in to the service at IP 172.17.0.4 and check it with info, for example:
127.0.0.1:6379> info replication

# Replication
role:master
connected_slaves:1
slave0:ip=172.17.0.3,port=6379,state=online,offset=222807,lag=0
master_failover_state:no-failover
master_replid:3d63e8474dd7bcb282ff38027d4a78c413cede53
master_replid2:5baf174fd40e97663998abf5d8e89a51f7458488
master_repl_offset:222807
second_repl_offset:110197
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:29
repl_backlog_histlen:222779
127.0.0.1:6379>



From the information output above, the redis6381 service (172.17.0.4) has now become the master.

Sentinel advanced configuration
For the contents of the sentinel.conf file, we can also make enhanced configuration based on actual needs, for example:

sentinel monitor redis6379 172.17.0.2 6379 1 
daemonize yes #Background operation
logfile "/var/log/sentinel_log.log" #Run log
sentinel down-after-milliseconds redis6379 30000 #Default 30 seconds

Here:
1) daemonize yes means run in the background (the default is no)
2) logfile specifies the location and name of the log file
3) sentinel down-after-milliseconds specifies how long before the master is considered failed

For example: create the sentinel.conf file with the cat command and add the relevant content.

cat <<EOF > /etc/redis/sentinel.conf
sentinel monitor redis6379 172.17.0.2 6379 1
daemonize yes 
logfile "/var/log/sentinel_log.log"
sentinel down-after-milliseconds redis6379 30000 
EOF
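Once the sentinels are running, you can query them directly; by default redis-sentinel listens on port 26379. A sketch using the service name monitored above, issued inside any container running a sentinel:

```shell
# Ask a sentinel which address it currently considers the master
docker exec -it redis6380 redis-cli -p 26379 sentinel get-master-addr-by-name redis6379

# Full state of the monitored master (flags, number of slaves, quorum, ...)
docker exec -it redis6380 redis-cli -p 26379 sentinel master redis6379
```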

Working principle analysis of sentinel:

With a bare master-slave structure, the master redis handles writes (and reads) while the slave redis handles reads. The master is too important: if it goes down, the whole architecture can no longer write to the cache. Therefore sentinel nodes are added to monitor the master redis.

1. PING status monitoring (each instance shows it is still alive):
Each Sentinel sends a PING command once per second to all Master, Slave and other Sentinel instances it knows. If an instance's last valid reply to PING takes longer than the value specified by down-after-milliseconds (this option specifies, in milliseconds, how long without a valid reply before a master is considered unavailable; the default is 30 seconds), that instance is marked by the Sentinel as subjectively offline.

2. All sentinels confirm objective offline:
If a Master is marked as subjectively offline, all Sentinels monitoring that Master confirm, once per second, whether the Master really has entered the subjectively offline state.

When a sufficient number of Sentinels (greater than or equal to the value specified in the configuration file) confirm within the specified time range that the Master has indeed entered the subjectively offline state, the Master is marked as objectively offline.

3. INFO master-slave monitoring (tracking whether the master-slave structure is usable):
In general, each Sentinel sends INFO commands to all Masters and Slaves it knows every 10 seconds. When a Master is marked as objectively offline by Sentinel, the frequency of INFO commands sent to all Slaves of the offline Master changes from once every 10 seconds to once per second.

4. Removing the offline state (return to normal operation):
If not enough Sentinels agree that the Master is offline, the Master's objectively offline status is removed. If the Master again returns a valid reply to a Sentinel's PING command, the Master's subjectively offline status is removed.



Redis cluster high availability


Brief introduction
The reliability of single-instance redis is poorly guaranteed and prone to single points of failure; its performance is also limited by the processing capacity of a single CPU. In real development redis must be highly available, so standalone mode is not our destination: the current redis architecture needs to be upgraded.
Sentinel mode achieves high availability, but in essence only one master provides service (even with read-write separation, all writes still go through that one master). When the master node's memory can no longer hold the system's data, a cluster must be considered.
The Redis cluster architecture achieves horizontal scaling: start N redis nodes and distribute the whole dataset across them, each node storing 1/N of the total data. Through this partitioning, Redis cluster provides a degree of availability: even if some nodes fail or cannot communicate, the cluster can continue to process command requests.
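As a taste of what that setup looks like, redis-cli ships a --cluster helper. A sketch only: the six addresses are hypothetical, each instance is assumed to have been started with cluster-enabled yes in its redis.conf, and a minimal cluster needs three masters (here each gets one replica):

```shell
# Create a 3-master / 3-replica cluster from six running redis instances
redis-cli --cluster create \
  172.17.0.2:6379 172.17.0.3:6379 172.17.0.4:6379 \
  172.17.0.5:6379 172.17.0.6:6379 172.17.0.7:6379 \
  --cluster-replicas 1

# Check slot coverage and node roles afterwards
redis-cli -h 172.17.0.2 cluster info
redis-cli -h 172.17.0.2 cluster nodes
```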
 


--------
Copyright notice: This article is the original article of CSDN blogger "Yutian Shuo code", which follows the CC 4.0 BY-SA copyright agreement. For reprint, please attach the source link of the original text and this notice.
Original link: https://blog.csdn.net/maitian_2008/article/details/119482237

Topics: Operation & Maintenance Redis server