Redis 6.0 learning notes

Posted by neoform on Tue, 01 Feb 2022 23:35:50 +0100

1, Redis overview

1. NoSQL overview

NoSQL means "Not Only SQL" and refers to non-relational databases. NoSQL databases fall into four categories:

  • KV (key-value) stores
  • Document databases (BSON, e.g. MongoDB)
  • Column-family stores (e.g. HBase, built on distributed file systems)
  • Graph databases (store relationships, e.g. Neo4j)

2. Redis introduction

Redis (Remote Dictionary Server) is an open-source, network-capable, in-memory key-value database written in ANSI C, with optional log-based persistence, and it provides APIs for many languages.

Redis official website: https://redis.io/

Redis Chinese official website: http://www.redis.cn/

3. Redis installation

Windows installation: https://github.com/dmajkic/redis/downloads (not recommended for Windows development)

Linux Installation:

# Download the latest version of redis from the official website
wget https://download.redis.io/releases/redis-6.2.4.tar.gz
#Move to opt directory
mv redis-6.2.4.tar.gz /opt/
# Just unzip it
tar -zxvf redis-6.2.4.tar.gz
#Install the basic build environment (gcc)
yum install gcc-c++
#Enter the installation package
cd redis-6.2.4/
# Compile and install. Redis installs to /usr/local/bin by default (like most software)
make
make install
#Enter the redis service directory
cd /usr/local/bin
#Create profile directory
mkdir conf
#Copy redis.conf into conf/ so the original is kept as a backup
cp /opt/redis-6.2.4/redis.conf conf/myredis.conf
#Switch to background startup: edit myredis.conf and set daemonize to yes, then start the server
redis-server conf/myredis.conf 
#Client connection test
redis-cli -p 6379
#To stop the server, run SHUTDOWN in the CLI before exit
#View process
ps -ef|grep redis

4. Redis stress test

redis-benchmark is the official benchmarking (stress-test) tool that ships with Redis.

No.  Option  Description                                                Default
1    -h      Server hostname                                            127.0.0.1
2    -p      Server port                                                6379
3    -s      Server socket (overrides host and port)
4    -c      Number of parallel connections                             50
5    -n      Total number of requests                                   10000
6    -d      Data size of the SET/GET value, in bytes                   3
7    -k      1 = keep alive, 0 = reconnect                              1
8    -r      Use random keys for SET/GET/INCR, random values for SADD
9    -P      Pipeline requests                                          1
10   -q      Quiet mode: show only the query/sec values
11   --csv   Output in CSV format
12   -l      Loop: run the tests forever
13   -t      Run only a comma-separated list of tests
14   -I      Idle mode: open N idle connections and wait

# Start the service and test in the current directory
redis-benchmark -h localhost -p 6379 -c 100 -n 100000

5. Basic knowledge

Redis has 16 databases by default and uses database 0 unless you switch with select. Before Redis 6, Redis was single-threaded. Because Redis operates on memory, the CPU is rarely the bottleneck; the bottleneck is most likely machine memory or network bandwidth. A single thread also keeps the implementation simple: no CPU context switching and no locking. Redis 6 adds multithreading, but it is still disabled by default and must be enabled in redis.conf; the extra threads are used only for reading/writing network data and protocol parsing, while commands are still executed sequentially on a single thread.
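For reference, a minimal redis.conf excerpt that turns this on (a sketch for Redis 6.0+; the thread count is illustrative):

# Enable I/O threads (Redis 6.0+); 4 is an illustrative value, usually kept below the core count
io-threads 4
# Let the I/O threads also handle reads and protocol parsing (off by default)
io-threads-do-reads yes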

127.0.0.1:6379> PING
PONG
#Switch database
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> DBSIZE
(integer) 0
127.0.0.1:6379[1]> set name shawn
OK
127.0.0.1:6379[1]> get name
"shawn"
127.0.0.1:6379[1]> keys *
1) "name"
#Clear database
127.0.0.1:6379[1]> FLUSHDB
OK
127.0.0.1:6379[1]> keys *
(empty array)
#Clear all databases
127.0.0.1:6379[1]> FLUSHALL
OK
#Shut down the service and exit
127.0.0.1:6379[1]> SHUTDOWN
not connected> exit

2, Five basic data types of Redis

Redis is an open-source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Redis has five basic data types:

  • String
  • Hash (similar to a Java Map)
  • List
  • Set
  • Zset (sorted set)

1. Redis-key

127.0.0.1:6379> set name shawn
OK
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> exists name #Does it exist
(integer) 1
127.0.0.1:6379> type name #type
string
127.0.0.1:6379> move name 1 #Move the key to database 1
(integer) 1
127.0.0.1:6379> set age 1
OK
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> expire age 10 #Set expiration time
(integer) 1
127.0.0.1:6379> ttl age #See how long it expires
(integer) 7
127.0.0.1:6379> get age
(nil)

2. String type

# ======================================================
# set,get,del,append,strlen
# ======================================================
127.0.0.1:6379> set name shawn
OK
127.0.0.1:6379> append name ,hello #Add
(integer) 11
127.0.0.1:6379> strlen name #String length
(integer) 11
127.0.0.1:6379> get name 
"shawn,hello"
127.0.0.1:6379> del name #delete
(integer) 1
127.0.0.1:6379> keys *
(empty array)
# ======================================================
# incr and decr increment/decrement a value by 1; the value must be a number.
# incrby and decrby add/subtract the specified amount to/from the number stored at the key.
# ======================================================
127.0.0.1:6379> set views 0
OK
127.0.0.1:6379> incr views #Self increment 1
(integer) 1
127.0.0.1:6379> decr views #Self subtraction 1
(integer) 0
127.0.0.1:6379> incrby views 10 #Self increase 10
(integer) 10
127.0.0.1:6379> decrby views 5 #Self subtraction 5
(integer) 5
127.0.0.1:6379> get views
"5"
# ======================================================
# range
# getrange gets the value within the specified range, similar to BETWEEN ... AND in SQL; 0 to -1 means the whole string
# setrange overwrites part of the string starting at the given offset: setrange key offset value
# ======================================================
127.0.0.1:6379> set name hello,shawn
OK
127.0.0.1:6379> getrange name 6 11
"shawn"
127.0.0.1:6379> setrange name 6 shanw22
(integer) 13
127.0.0.1:6379> get name
"hello,shanw22"
# ======================================================
# setex (set with expire) sets the expiration time
# setnx (set if not exist) is set when it does not exist (commonly used for distributed locks)
# ======================================================
127.0.0.1:6379> setex key1 30 hello #Set the key1 value to hello, and the expiration time is 30s
OK
127.0.0.1:6379> ttl key1
(integer) 25
127.0.0.1:6379> setnx key1 hello #Set successfully after expiration
(integer) 1
127.0.0.1:6379> setnx key1 hello #Setting failed
(integer) 0
# ======================================================
# mset sets up multiple groups of k-v at the same time
# mget obtains multiple sets of k-v at the same time
# msetnx sets the keys only if none of them already exist: it returns 1 when all keys are set,
# and 0 when nothing is set (at least one key already exists). The operation is atomic: all or nothing.
# ======================================================
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3
OK
127.0.0.1:6379> keys *
1) "k3"
2) "k2"
3) "k1"
127.0.0.1:6379> msetnx k1 v1 k4 v4 #Atomic operation
(integer) 0
127.0.0.1:6379> keys *
1) "k3"
2) "k2"
3) "k1"
# Can cache objects
127.0.0.1:6379> msetnx user:1:name shawn user:1:age 18
(integer) 1
127.0.0.1:6379> mget user:1:name user:1:age
1) "shawn"
2) "18"
# ======================================================
# getset (returns the old value, then sets the new one)
# ======================================================
127.0.0.1:6379> getset db redis
(nil)
127.0.0.1:6379> getset db mysql
"redis"
#=======================================================
#A value in Redis can be a string or a number (numbers are stored as strings)

3. List list

A list is essentially a doubly linked list, so it can serve as a queue, a stack, or a message queue. Operations at either end are cheap; operations in the middle are relatively expensive.
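As a quick sketch of both uses (key name q is arbitrary; the commands are explained below): pushing on one end and popping the other gives a FIFO queue, while pushing and popping the same end gives a LIFO stack.

# Queue (FIFO): push left, pop right
127.0.0.1:6379> lpush q job1 job2
(integer) 2
127.0.0.1:6379> rpop q
"job1"
# Stack (LIFO): push left, pop left
127.0.0.1:6379> lpop q
"job2"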

# ======================================================
# lpush: inserts one or more values at the head of the list. (left)
# rpush: inserts one or more values at the end of the list. (right)
# lrange: returns the elements within the specified interval in the list. The interval is specified by offset START and END.
# Where 0 represents the first element of the list, 1 represents the second element of the list, and so on.
# You can also use negative subscripts, with - 1 for the last element of the list, - 2 for the penultimate element of the list, and so on. 
# The lpop command removes and returns the first element of the list. When the list key does not exist, nil is returned
# rpop removes the last element of the list, and the return value is the removed element
# ======================================================
127.0.0.1:6379> lpush list one
(integer) 1
127.0.0.1:6379> lpush list two
(integer) 2
127.0.0.1:6379> lrange list 0 -1 #Get the value in the list
1) "two"
2) "one"
127.0.0.1:6379> rpush list three
(integer) 3
127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
3) "three"
127.0.0.1:6379> lpop list
"two"
127.0.0.1:6379> rpop list
"three"
# ======================================================
# lindex, get the element according to the index subscript (- 1 represents the last, 0 represents the first)
# llen is used to return the length of the list.
# lrem key removes the elements in the list equal to the parameter VALUE according to the VALUE of the parameter COUNT
# ltrim key trims a list, that is to say, only the elements within the specified interval will be retained in the list, and the elements not within the specified interval will be deleted.
# rpoplpush removes the last element of the list, adds it to another list, and returns
# lset key index value sets the value of the element whose index is the index of the list key to value
# ======================================================
127.0.0.1:6379> lindex list 0
"one"
127.0.0.1:6379> llen list
(integer) 1
127.0.0.1:6379> lrem list 2 one #Remove up to two occurrences of "one"; only one exists, so one is removed
(integer) 1
127.0.0.1:6379> rpoplpush list mylist
"hello"
127.0.0.1:6379> lset list 0 hi #Update the element at index 0; errors if the key or index does not exist
OK
# ======================================================
# linsert key before|after pivot value inserts an element before or after another element in the list:
# the value value is inserted into the list key, before or after the first occurrence of pivot.
# ======================================================
127.0.0.1:6379> lrange list 0 -1
1) "hi"
2) "hello1"
127.0.0.1:6379> linsert list after hi new #Insert new after hi
(integer) 3
127.0.0.1:6379> lrange list 0 -1
1) "hi"
2) "new"
3) "hello1"

4. Set set

Values in a set are unordered and unique; duplicates are not allowed.

# ======================================================
# sadd adds one or more member elements to the collection and cannot be repeated
# smembers returns all members in the collection.
# The sismember command determines whether a member element is a member of a collection.
# scard, get the number of elements in the collection
# srem key member removes one or more members from the set
# ======================================================
127.0.0.1:6379> sadd myset hello
(integer) 1
127.0.0.1:6379> sadd myset shawn
(integer) 1
127.0.0.1:6379> smembers myset 
1) "shawn"
2) "hello"
127.0.0.1:6379> sismember myset hello
(integer) 1
127.0.0.1:6379> scard myset
(integer) 2
127.0.0.1:6379> srem myset hello
(integer) 1
# ======================================================
# The srandmember command returns a random member of the set.
# spop key is used to remove one or more random elements of the specified key in the collection
# smove SOURCE DESTINATION MEMBER to move the specified member element from the source set to the destination set.
# Difference: sdiff; intersection: sinter; union: sunion (useful for features such as mutual follows in social apps)
# ======================================================
127.0.0.1:6379> sadd k1 a b c
(integer) 3
127.0.0.1:6379> sadd k2 b c d
(integer) 3
127.0.0.1:6379> sdiff k1 k2
1) "a"
127.0.0.1:6379> sinter k1 k2
1) "b"
2) "c"
127.0.0.1:6379>  sunion k1 k2
1) "a"
2) "c"
3) "b"
4) "d"

5. Hash hash

A hash is a map of field-value pairs (key → Map), usually used to store frequently changing objects.
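For example, one hash per entity works well for caching objects (the user:<id> key layout is just a convention; field names are illustrative):

# Sketch: cache a user object in a single hash
127.0.0.1:6379> hmset user:1000 name shawn age 18
OK
127.0.0.1:6379> hincrby user:1000 age 1
(integer) 19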

# ======================================================
# The hset and hget commands are used to assign values to fields in the hash table.
# hmset/hmget set/get multiple field-value pairs at once; existing fields are overwritten.
# hgetall returns all fields and values in the hash.
# hdel is used to delete one or more specified fields in the hash table key
# ======================================================
127.0.0.1:6379> hset myhash field shawn
(integer) 1
127.0.0.1:6379> hget myhash field
"shawn"
127.0.0.1:6379> hmset myhash field hello field1 world
OK
127.0.0.1:6379> hmget myhash field  field1 
1) "hello"
2) "world"
127.0.0.1:6379> hgetall myhash
1) "field"
2) "hello"
3) "field1"
4) "world"
127.0.0.1:6379> hdel myhash field
(integer) 1
# ======================================================
# hlen gets the number of fields in the hash table.
# hexists checks whether the specified field of the hash table exists.
# hkeys gets all field s in the hash table.
# hvals returns the values of all fields in the hash table.
# ======================================================
127.0.0.1:6379> hlen myhash #Number of fields
(integer) 1
127.0.0.1:6379> hexists myhash field
(integer) 0
127.0.0.1:6379> hkeys myhash
1) "field1"
127.0.0.1:6379> hvals myhash
1) "world"
# ======================================================
# hincrby adds the specified increment value to the field value in the hash table
# hsetnx assigns a value to a field that does not exist in the hash table
# ======================================================
127.0.0.1:6379> hset myhash field 1
(integer) 1
127.0.0.1:6379> hincrby myhash field 1
(integer) 2
127.0.0.1:6379> hsetnx myhash field shawn
(integer) 0

6. Ordered set Zset

Zset adds a score (weight) to each member, which can rank items by importance, e.g. leaderboards and Top-N queries.
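A minimal leaderboard sketch (names and scores are made up; zrevrange is the descending counterpart of the zrange command shown below):

127.0.0.1:6379> zadd board 90 alice 75 bob 88 carol
(integer) 3
127.0.0.1:6379> zrevrange board 0 1 withscores #Top 2 by score, descending
1) "alice"
2) "90"
3) "carol"
4) "88"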

# ======================================================
# zadd adds one or more member elements and their fractional values to an ordered set.
# zrange returns an ordered set of members within a specified interval
# ======================================================
127.0.0.1:6379> zadd myset 1 one 2 two
(integer) 2
127.0.0.1:6379> zrange myset 0 -1
1) "one"
2) "two"
# ======================================================
# zrangebyscore returns the members within the given score range, ordered by ascending score.
# ======================================================
127.0.0.1:6379> zadd salary 2500 Amy 3500 Mike 200 Shawn
(integer) 3
127.0.0.1:6379> zrangebyscore salary -inf +inf #Ascending order
1) "Shawn"
2) "Amy"
3) "Mike"
127.0.0.1:6379> zrangebyscore salary -inf 2500 WITHSCORES #Return scores along with the members
1) "Shawn"
2) "200"
3) "Amy"
4) "2500"
# ======================================================
# zrem removes one or more members from an ordered set
# The zcard command is used to count the number of elements in the collection.
# zcount calculates the number of members in the specified score interval in the ordered set.
# zrank returns the ranking of specified members in an ordered set. The members of the ordered set are arranged in the order of increasing the score value (from small to large).
# zrevrank returns the ranking of members in an ordered set. The members of the ordered set are sorted according to the decreasing value of the score (from large to small).
# ======================================================
127.0.0.1:6379> zrem salary Shawn
(integer) 1
127.0.0.1:6379> zcard salary
(integer) 2
127.0.0.1:6379> zcount salary -inf 2500
(integer) 1
127.0.0.1:6379> zrank salary Mike #Mike's salary ranking
(integer) 1
127.0.0.1:6379> zrevrank salary Mike
(integer) 0

3, Three special data types of Redis

1. GEO location

The GEO type has six common commands: geoadd, geopos, geodist, georadius, georadiusbymember, and geohash.
Official documents: https://www.redis.net.cn/order/3685.html

Because the data contains Chinese characters, start the client with redis-cli -p 6379 --raw so that they display correctly.

geoadd

# grammar
geoadd key longitude latitude member ...
# Add the given spatial element (latitude, longitude, name) to the specified key.
# The data is stored in the key as a sorted set (zset), so that commands such as georadius and georadiusbymember can later query these elements by location.
# The geoadd command accepts parameters in the standard x,y format, so the user must enter longitude first and then latitude.
# The coordinates geoadd can record are limited: areas very close to the poles cannot be indexed.
# The effective longitude is between - 180 and 180 degrees, and the effective latitude is between -85.05112878 and 85.05112878 degrees. When the user tries to enter an out of range longitude or latitude, the geoadd command returns an error.
#===============================================
127.0.0.1:6379> geoadd china:city 116.23 40.22 Beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 106.54 29.40 Chongqing 108.93 34.23 Xi'an 114.02 30.58 Wuhan
(integer) 3

geopos

# grammar
geopos key member [member...]
#Returns the position (longitude and latitude) of all the given positioning elements from the key
#===============================================
127.0.0.1:6379> geopos china:city Beijing
1) 1) "116.23000055551528931"
   2) "40.2200010338739844"

geodist

# The parameter unit of the specified unit must be one of the following units:
# m is expressed in meters.
# km is expressed in kilometers.
# mi is expressed in miles.
# ft is in feet.
# If you do not explicitly specify the unit parameter, GEODIST defaults to meters.
#==================================================
127.0.0.1:6379> geodist china:city Beijing Chongqing km
"1491.6716"

georadius

Take the given latitude and longitude as the center to find out the elements within a certain radius

# Query in the vicinity, such as the realization of the function of nearby people. count limits the number of queries
127.0.0.1:6379> georadius china:city 100 30 1000 km 
Chongqing
 Xi'an
127.0.0.1:6379> georadius china:city 100 30 1000 km withcoord withdist count 2
 Chongqing
635.2850
106.54000014066696167
29.39999880018641676
 Xi'an
963.3171
108.92999857664108276
34.23000121926852302

georadiusbymember

#Find elements around the specified member
127.0.0.1:6379> georadiusbymember china:city Beijing 1000 km
 Beijing
 Xi'an

geohash

This command will return an 11 character Geohash string

# Redis uses geohash to map two-dimensional coordinates to a one-dimensional string: the longer the string, the more precise the position, and the longer the shared prefix of two strings, the closer the two points. Rarely used
127.0.0.1:6379> geohash china:city Beijing Chongqing
wx4sucu47r0
wm5z22h53v0

zrem

# GEO is built on a zset under the hood, so members can be removed with zrem
127.0.0.1:6379> zrange china:city 0 -1
 Chongqing
 Xi'an
 Wuhan
 Beijing
127.0.0.1:6379> zrem china:city Beijing
1

2. HyperLogLog

Redis HyperLogLog is a probabilistic structure for cardinality estimation. Its advantage is that no matter how many elements are added, or how large they are, the space needed is fixed and very small: 12 KB. It can be used to count unique website visitors, as long as a small error rate is acceptable.

127.0.0.1:6379> pfadd mykey a b c d e f g #Create the first set of elements
(integer) 1
127.0.0.1:6379> pfcount mykey #Count the cardinality number of elements
(integer) 7
127.0.0.1:6379> pfadd mykey1 s f v b r t y u a  #Create a second group
(integer) 1
127.0.0.1:6379> pfmerge mykey2 mykey mykey1 #Union
OK
127.0.0.1:6379> pfcount mykey2
(integer) 12

3. Bitmaps

Bitmaps store individual bits (only 0 and 1) and suit boolean per-user state: active/inactive, logged in/not logged in, and so on.

# Use a bitmap to record a week of clock-ins, as below:
# Monday: 1, Tuesday: 1, Wednesday: 0, Thursday: 0, Friday: 1, Saturday: 1, Sunday: 0 (1 = clocked in, 0 = not)
127.0.0.1:6379> setbit sign 0 1
(integer) 0
127.0.0.1:6379> setbit sign 1 1
(integer) 0
127.0.0.1:6379> setbit sign 2 0
(integer) 0
127.0.0.1:6379> setbit sign 3 0
(integer) 0
127.0.0.1:6379> setbit sign 4 1
(integer) 0
127.0.0.1:6379> setbit sign 5 1
(integer) 0
127.0.0.1:6379> setbit sign 6 0
(integer) 0
127.0.0.1:6379> getbit sign 1 #Check whether a given day was clocked in
(integer) 1
127.0.0.1:6379> bitcount sign  #Count the clocked-in days this week
(integer) 4

4, Transactions

In Redis, a single command executes atomically, but transactions do not guarantee atomicity and are never rolled back: if a command fails at runtime, the remaining commands still execute. If a command fails to compile (an error detected while it is queued), the whole transaction is rejected. Commands in a transaction execute in order, and transactions have no concept of isolation levels.

Redis transaction:

  • Open the transaction (multi)
  • Queue the commands
  • Execute the transaction (exec)
127.0.0.1:6379> multi #Open transaction
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> get k1
QUEUED
127.0.0.1:6379(TX)> exec #Execute transaction
1) OK
2) OK
3) "v1"
#=================================
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> discard  #Abandon transaction
OK

Pessimistic lock

Pessimistic locking, as the name suggests, is pessimistic: every time you fetch the data you assume someone else will modify it, so you lock it first, and anyone else who wants the data blocks until they acquire the lock. Traditional relational databases rely on many such mechanisms: row locks, table locks, read locks, write locks, all acquired before the operation.

Optimistic lock

Optimistic locking, as the name suggests, is optimistic: every time you fetch the data you assume others will not modify it, so you do not lock. When updating, you check whether anyone else updated the data in the meantime, typically with a version number. Optimistic locking suits read-heavy workloads and improves throughput; the update is performed only if the submitted version is greater than the record's current version.

# Monitor with watch and modify it after success. It can be used as an optimistic lock
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set money 100
QUEUED
#At this time, open a new client
127.0.0.1:6379> set money 500
OK
#Back in the first client, execute the transaction: it detects that the watched key has changed, so the update fails
127.0.0.1:6379(TX)> exec
(nil)
# To cancel monitoring, use unwatch
# Once EXEC runs, the WATCH on the variables is cancelled whether or not the transaction succeeds. So after a failed transaction, you must run WATCH again and open a new transaction to retry.

5, Operating Redis from Java

1. Jedis

Jedis is a Java connection development tool officially recommended by Redis.

First, create an empty maven project

<!-- Check the Maven repository for the latest version -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.6.0</version>
</dependency>
// On success, ping() returns PONG. Jedis wraps the common Redis commands as same-named methods.
public static void main(String[] args) {
    // Connect to the local Redis service
    Jedis jedis = new Jedis("localhost", 6379);
    // If the Redis service has a password set, uncomment the following line
    // jedis.auth("123456");
    System.out.println("Connection successful");
    // Check whether the service is running
    System.out.println("The service is running: " + jedis.ping());
}
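Jedis also wraps the transaction commands from the Transactions section (multi/exec/watch). A minimal sketch, assuming the same local server:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class JedisTransactionDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.watch("money");            // optimistic lock, same as WATCH in the CLI
        Transaction tx = jedis.multi();  // MULTI: open the transaction
        tx.set("money", "100");          // commands are queued, not executed yet
        tx.incrBy("money", 20);
        // EXEC returns the queued results, or null if a watched key was changed
        System.out.println(tx.exec());
        jedis.close();
    }
}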

2. Spring boot integrates Redis

Simple use

First, import the dependency in pom.xml

<!-- Since Spring Boot 2.0 the underlying client is Lettuce (higher performance); before 2.0 it was Jedis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Configure application.yml

#Configure redis
spring:
  redis:
    host: 127.0.0.1
    port: 6379

Test

@SpringBootTest
class RedisSpringApplicationTests {
	//redisTemplate operates on the different data types; the API mirrors the Redis commands
	//opsForValue operates on strings
	//opsForList operates on lists
	//opsForSet
	//opsForHash
	//opsForZSet
	//opsForGeo
	//opsForHyperLogLog
	@Autowired
	RedisTemplate<String, String> redisTemplate;
	@Test
	void contextLoads() {
		redisTemplate.opsForValue().set("k","v");
	}
}
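The other opsForX handles mirror the CLI commands one-to-one. A couple more calls, as a sketch inside the same test class (key names are arbitrary):

	@Test
	void moreOps() {
		// List: equivalent to LPUSH / LRANGE
		redisTemplate.opsForList().leftPush("list", "one");
		System.out.println(redisTemplate.opsForList().range("list", 0, -1));
		// Hash: equivalent to HSET / HGET
		redisTemplate.opsForHash().put("myhash", "field", "shawn");
		System.out.println(redisTemplate.opsForHash().get("myhash", "field"));
	}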

Source code analysis

Find the Redis auto-configuration class (RedisAutoConfiguration) under External Libraries; the RedisProperties class shows the available configuration keys.

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass({RedisOperations.class})
@EnableConfigurationProperties({RedisProperties.class})
@Import({LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class})
public class RedisAutoConfiguration {
    public RedisAutoConfiguration() {
    }
	@Bean
	//We can define our own RedisTemplate to replace the default. The annotation below means: if the Spring container already contains a bean named redisTemplate, this auto-configured one is not instantiated.
    @ConditionalOnMissingBean(name = {"redisTemplate"})
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        //The default template is barely configured and uses JDK serialization, so objects are not stored in a readable form
        //In practice we usually want RedisTemplate<String, Object>
        RedisTemplate<Object, Object> template = new RedisTemplate();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }
    @Bean
    @ConditionalOnMissingBean
	//Strings are the most common type, so a dedicated StringRedisTemplate bean is provided
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        StringRedisTemplate template = new StringRedisTemplate();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }
}

Utility class (ready to use)

Customize RedisTemplate

@Configuration
public class RedisConfig {
    // Write your own RedisTemplate
    @Bean
    @SuppressWarnings("all")
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        //Serialization configuration
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper om = new ObjectMapper();
        // Serialize all fields and getters/setters regardless of visibility; ANY includes private and public
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // Record type information when serializing. The class must be non-final; final classes such as String and Integer would throw at runtime
        om.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        //Serialization of String
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // The key adopts the serialization method of String
        template.setKeySerializer(stringRedisSerializer);
        // hash keys use String serialization
        template.setHashKeySerializer(stringRedisSerializer);
        // value uses jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // The value of hash is jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}
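With this template in place, objects round-trip as JSON without manual conversion. A usage sketch, where User is a hypothetical POJO with getters/setters:

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Test
    void saveObject() {
        // The Jackson value serializer stores the object as readable JSON
        User user = new User("shawn", 18);  // hypothetical POJO
        redisTemplate.opsForValue().set("user:1", user);
        System.out.println(redisTemplate.opsForValue().get("user:1"));
    }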

Create tool class

@Component
public final class RedisUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // =============================common============================
    /**
     * Specify cache expiration time
     * @param key  key
     * @param time Time (seconds)
     */
    public boolean expire(String key, long time) {
        try {
            if (time > 0) {
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get expiration time according to key
     * @param key Key cannot be null
     * @return Time in seconds; 0 means the key is permanently valid
     */
    public long getExpire(String key) {
        return redisTemplate.getExpire(key, TimeUnit.SECONDS);
    }

    /**
     * Determine whether the key exists
     * @param key key
     * @return true Exists false does not exist
     */
    public boolean hasKey(String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete cache
     * @param key One or more values can be passed
     */
    @SuppressWarnings("unchecked")
    public void del(String... key) {
        if (key != null && key.length > 0) {
            if (key.length == 1) {
                redisTemplate.delete(key[0]);
            } else {
                redisTemplate.delete((Collection<String>) CollectionUtils.arrayToList(key));
            }
        }
    }

    // ============================String=============================
    /**
     * Normal cache fetch
     * @param key key
     * @return value
     */
    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    /**
     * Normal cache put
     * @param key   key
     * @param value value
     * @return true Success false failure
     */

    public boolean set(String key, Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Normal cache put in and set time
     * @param key   key
     * @param value value
     * @param time  Time (seconds) time should be greater than 0. If time is less than or equal to 0, the indefinite period will be set
     * @return true Success false failure
     */
    public boolean set(String key, Object value, long time) {
        try {
            if (time > 0) {
                redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
            } else {
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Increasing
     * @param key   key
     * @param delta How many to add (greater than 0)
     */
    public long incr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("The increment factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }

    /**
     * Diminishing
     * @param key   key
     * @param delta How much to subtract (greater than 0)
     */
    public long decr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("Decrement factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }

    // ================================Map=================================
    /**
     * HashGet
     * @param key  Key cannot be null
     * @param item Item cannot be null
     */
    public Object hget(String key, String item) {
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * Get all key values corresponding to hashKey
     * @param key key
     * @return Corresponding multiple key values
     */
    public Map<Object, Object> hmget(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet
     * @param key key
     * @param map Corresponding to multiple key values
     */
    public boolean hmset(String key, Map<String, Object> map) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * HashSet And set the time
     * @param key  key
     * @param map  Corresponding to multiple key values
     * @param time Time (seconds)
     * @return true Success false failure
     */
    public boolean hmset(String key, Map<String, Object> map, long time) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put data into a hash table. If it does not exist, it will be created
     *
     * @param key   key
     * @param item  term
     * @param value value
     * @return true Success false failure
     */
    public boolean hset(String key, String item, Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put data into a hash table. If it does not exist, it will be created
     *
     * @param key   key
     * @param item  term
     * @param value value
     * @param time  Time (seconds): Note: if the existing hash table has time, the original time will be replaced here
     * @return true Success false failure
     */
    public boolean hset(String key, String item, Object value, long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete values in hash table
     *
     * @param key  Key cannot be null
     * @param item One or more items; cannot be null
     */
    public void hdel(String key, Object... item) {
        redisTemplate.opsForHash().delete(key, item);
    }

    /**
     * Judge whether there is the value of this item in the hash table
     *
     * @param key  Key cannot be null
     * @param item Item cannot be null
     * @return true Exists false does not exist
     */
    public boolean hHasKey(String key, String item) {
        return redisTemplate.opsForHash().hasKey(key, item);
    }

    /**
     * Increment a hash field; if the field does not exist it is created, and the new value is returned
     *
     * @param key  key
     * @param item term
     * @param by   How many to add (greater than 0)
     */
    public double hincr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, by);
    }

    /**
     * hash Diminishing
     *
     * @param key  key
     * @param item term
     * @param by   How much to subtract (greater than 0)
     */
    public double hdecr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, -by);
    }

    // ============================set=============================
    /**
     * Get all the values in the Set according to the key
     * @param key key
     */
    public Set<Object> sGet(String key) {
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Query from a set according to value whether it exists
     *
     * @param key   key
     * @param value value
     * @return true Exists false does not exist
     */
    public boolean sHasKey(String key, Object value) {
        try {
            return redisTemplate.opsForSet().isMember(key, value);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put data into set cache
     * @param key    key
     * @param values Values can be multiple
     * @return Number of successes
     */
    public long sSet(String key, Object... values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Put set data into cache
     * @param key    key
     * @param time   Time (seconds)
     * @param values Values can be multiple
     * @return Number of successes
     */
    public long sSetAndTime(String key, long time, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if (time > 0) {
                expire(key, time);
            }
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Gets the length of the set cache
     *
     * @param key key
     */
    public long sGetSetSize(String key) {
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Remove with value
     *
     * @param key    key
     * @param values Values can be multiple
     * @return Number of removed
     */
    public long setRemove(String key, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    // ===============================list=================================
    /**
     * Get the contents of the list cache
     *
     * @param key   key
     * @param start start
     * @param end   End 0 to - 1 represent all values
     */
    public List<Object> lGet(String key, long start, long end) {
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Gets the length of the list cache
     *
     * @param key key
     */
    public long lGetListSize(String key) {
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Get the value in the list through the index
     *
     * @param key   key
     * @param index For index >= 0: 0 is the head, 1 the second element, and so on; for index < 0: -1 is the tail, -2 the penultimate element, and so on
     */
    public Object lGetIndex(String key, long index) {
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put the list into the cache
     * @param key   key
     * @param value value
     * @param time  Time (seconds)
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     * @return
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     * @param time  Time (seconds)
     * @return
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Modify a piece of data in the list according to the index
     *
     * @param key   key
     * @param index Indexes
     * @param value value
     * @return
     */
    public boolean lUpdateIndex(String key, long index, Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Remove count occurrences of value from the list
     *
     * @param key   key
     * @param count How many to remove
     * @param value value
     * @return Number of removed
     */
    public long lRemove(String key, long count, Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }
}

6, redis.conf configuration

config get * returns all configuration settings.

The configuration file is at /opt/redis-6.2.4/redis.conf; common settings are listed below.

# Bind addresses; * -::* means listen on all IPv4 and IPv6 interfaces
bind * -::* 
# protected-mode is on by default; to accept connections from other hosts you must disable it (and set a password)
protected-mode yes
# Port number
port 6379
# Close the connection after the client is idle for N seconds (0 disabled)
timeout 0
# Send TCP keepalive ACKs to clients to detect dead connections and keep connections alive; in seconds, every 300 s by default, 0 disables it
tcp-keepalive 300
#==================general=================
# By default, Redis will not run as a daemon. Set to yes if necessary
daemonize yes
# Redis daemon can be managed through upstart and systemd
supervised no
# To run redis in the background process mode, you need to specify the pid file
pidfile /var/run/redis_6379.pid
# log level
loglevel notice
# Specify the log file name. When specified as null, it will be output to the standard output device. If Redis is started as a daemon, when the log file name is empty, the log will be output to / dev/null.
logfile ""
# Number of databases
databases 16
# Display log when redis starts
always-show-logo no
#==================Snapshot snapshot=================
save 900 1 #If at least 1 key changed within 900 s, trigger a save
save 300 10 #If at least 10 keys changed within 300 s, trigger a save
save 60 10000 #If at least 10000 keys changed within 60 s, trigger a save
# Default yes: when RDB is enabled and the last background save failed, Redis stops accepting writes
stop-writes-on-bgsave-error yes
# Use compressed rdb files yes: compressed, but requires some cpu consumption. No: no compression. More disk space is required
rdbcompression yes
# Whether to verify rdb files is more conducive to the fault tolerance of files, but there will be about 10% performance loss when saving rdb files
rdbchecksum yes
# File name of rdb file
dbfilename dump.rdb
# Delete RDB files used only for replication on instances without persistence
rdb-del-sync-files no
# Set the path where rdb files are stored
dir ./
#==================Master slave replication=================
#When the local machine is a slave service, set the IP and port of the master service
replicaof <masterip> <masterport>
#When the local machine is a slave service, set the connection password of the master service.
masterauth <master-password>
#When this machine is a slave service, set the user name of the master service.
masteruser <username>
#When the slave loses its connection to the master, or replication is in progress: if yes, the slave still answers client requests (the data may be stale or missing); if no, the slave returns the error "SYNC with master in progress"
replica-serve-stale-data yes
#If yes, the slave instance is read-only. If no, the slave instance is readable and writable.
replica-read-only yes
#Specify the period for slave to Ping the master periodically. The default is 10 seconds.
repl-ping-replica-period 10
#The timeout of ping ing the master service from the service. If it exceeds the time set by repl timeout, the slave will think that the master is down.
repl-timeout 60
#After the slave and master are synchronized (psync/sync is sent), is the subsequent synchronization set to TCP_NODELAY. If set to yes, redis will merge small TCP packets to save bandwidth, but it will increase the synchronization delay (40ms), resulting in inconsistent data between the master and slave. If set to no, redis master will send synchronization data immediately without delay.
repl-disable-tcp-nodelay no
#When the master cannot work normally, Redis Sentinel will select a new master from the slave. The smaller the value, the more priority will be selected. However, if it is 0, it means that the slave cannot be selected. The default priority is 100.
replica-priority 100
#==================Security security=================
#Maximum number of entries in the ACL log (default 128)
acllog-max-len 128
#Location of ACL external configuration file
aclfile /etc/redis/users.acl
#The access password of the current redis service is no password by default
requirepass 123456
#You can also set it from the command line
config set requirepass "123456"
#With a password set, commands require authentication:
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123456
OK
#==================Limit=================
# Maximum number of client connections
maxclients 10000
# Memory limit bytes
maxmemory <bytes>
# maxmemory-policy: the eviction policy used when the memory limit is reached
#volatile-lru: evict keys that have an expire set, using the LRU algorithm
#volatile-random: evict random keys that have an expire set
#volatile-ttl: evict keys that have an expire set, shortest remaining TTL first
#allkeys-lru: evict any key, using the LRU algorithm
#allkeys-random: evict any key at random
#noeviction: evict nothing; return an error on write commands
maxmemory-policy noeviction
#==================append only mode=================
#Redis offers two persistence modes, RDB and AOF. RDB is enabled by default (the common case); AOF must be enabled manually
appendonly no
# AOF file name
appendfilename "appendonly.aof"
# appendfsync aof persistence policy configuration
# no means that fsync is not executed. The operating system ensures that the data is synchronized to the disk, and the speed is the fastest
# always indicates that fsync is executed for each write to ensure data synchronization to disk
# everysec means that fsync is executed every second, which may result in the loss of this 1s data
appendfsync everysec
#Whether to skip fsync during an AOF rewrite; the default no is safer for data
no-appendfsync-on-rewrite no
# Growth percentage over the last rewrite size that triggers a new rewrite
auto-aof-rewrite-percentage 100
# Minimum AOF size before a rewrite is considered
auto-aof-rewrite-min-size 64mb
#==================Cluster cluster=====================
# Enable cluster mode
cluster-enabled yes      
# Sets the timeout milliseconds of the current node connection
cluster-node-timeout 15000     
#Set the path of the current node cluster configuration file
cluster-config-file node_6381.conf             

7, Redis persistence

Redis is an in-memory database. If the in-memory state is not saved to disk, it disappears as soon as the server process exits, so Redis provides persistence.

1. RDB (Redis DataBase)

RDB writes a point-in-time snapshot of the in-memory data set to disk at configured intervals; on restore, the snapshot file is read straight back into memory.

For persistence, Redis forks a separate child process, which first writes the data to a temporary file; when the snapshot completes, the temporary file replaces the previous snapshot file. The main process performs no I/O during the whole procedure, which keeps performance very high. If large-scale recovery is needed and losing the most recent writes is acceptable, RDB is more efficient than AOF. Its drawbacks are that data written after the last snapshot can be lost, and the fork consumes extra memory during the backup.

RDB snapshot

# For RDB, three mechanisms are provided: save, bgsave and auto trigger.
# Automatically triggered in redis Configure under conf
# rdb files saved in the three cases can be configured and are in the current directory by default
127.0.0.1:6379> bgsave
Background saving started
127.0.0.1:6379> save
OK
# To restore, just place dump.rdb in the configured dir directory; Redis loads it automatically at startup
127.0.0.1:6379> config get dir
1) "dir"
2) "/usr/local/bin"

2. AOF (Append Only File)

AOF logs every write operation executed by Redis (reads are not logged) by appending each command to a file; the file is only appended to, never overwritten. On startup, Redis reads the file and rebuilds the data: after a restart, the logged write commands are re-executed from front to back to recover the data set.

#To use AOF, enable it in the configuration (appendonly yes)
#Normal AOF recovery: copy a valid AOF file into the configured directory (config get dir) and restart Redis; it reloads the file
#If the AOF file is corrupted, Redis cannot start; repair it with:
redis-check-aof --fix appendonly.aof

3. Summary

1. RDB persistence snapshots the data set at configured intervals.
2. AOF persistence logs every write to the server; on restart the commands are re-executed to rebuild the original data. AOF appends each write in the Redis protocol format to the end of the file, and Redis can also rewrite the AOF in the background so the file does not grow without bound.
3. If Redis is only a cache, you can disable persistence entirely.
4. When both persistence modes are enabled:

  • When Redis restarts, it loads the AOF file first to recover the data, because the AOF data set is usually more complete than the RDB one.
  • RDB data is not real-time, and when both are enabled the server only looks for the AOF file on restart. Still, do not rely on AOF alone: RDB is better suited to backups (the AOF file is constantly changing and hard to back up), restarts faster, and avoids potential AOF bugs. Keep RDB as a safety net.

5. Performance recommendations

  • Because RDB files are only used for backups, it is recommended to persist RDB only on the slave, and one backup every 15 minutes is enough; keep only the save 900 1 rule.
  • If you enable AOF, the benefit is that in the worst case you lose less than two seconds of data, and the startup script simply loads the AOF file. The cost is continuous I/O plus the AOF rewrite: data written during a rewrite must be appended to the new file at the end, which causes nearly unavoidable blocking. As long as disk space allows, minimize the rewrite frequency: the default rewrite base size of 64 MB is too small and can be raised to 5 GB or more, and the default trigger of 100% growth over the base size can likewise be raised to a suitable value.
  • If AOF is not enabled, high availability can be achieved with master-slave replication alone, saving a lot of I/O and avoiding rewrite-induced jitter. The price is that if master and slave go down at the same time, more than ten minutes of data may be lost, and the startup script must compare the RDB files on the two machines and load the newer one. This is Weibo's architecture.

8, Redis publish and subscribe

Redis publish / subscribe (pub/sub) is a message communication mode: the sender (pub) sends messages and the subscriber (sub) receives messages. Redis client can subscribe to any number of channels.

Common commands for publishing and subscribing to redis

No.  Command                                       Description
1    PSUBSCRIBE pattern [pattern ...]              Subscribe to channels matching the given patterns
2    PUBSUB subcommand [argument [argument ...]]   Inspect the state of the pub/sub system
3    PUBLISH channel message                       Send a message to the given channel
4    PUNSUBSCRIBE [pattern [pattern ...]]          Unsubscribe from all channels matching the given patterns
5    SUBSCRIBE channel [channel ...]               Subscribe to the given channel(s)
6    UNSUBSCRIBE [channel [channel ...]]           Unsubscribe from the given channel(s)

Test

#Start a client and subscribe to a channel
127.0.0.1:6379> SUBSCRIBE shawn
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "shawn"
3) (integer) 1
#Open another client and send a message
127.0.0.1:6379> PUBLISH shawn hello
(integer) 1
#The first client receives a subscription message
1) "message"
2) "shawn"
3) "hello"

principle

  • Redis is implemented in C; reading the pubsub.c source file shows how publish/subscribe works under the hood and deepens understanding of Redis

  • Redis implements PUBLISH and SUBSCRIBE functions through PUBLISH, SUBSCRIBE, PSUBSCRIBE and other commands

  • After a client subscribes to a channel with the SUBSCRIBE command, redis-server maintains a dictionary whose keys are channels and whose values are linked lists of all clients subscribed to that channel. SUBSCRIBE simply appends the client to the list for the given channel

  • Send a message to subscribers through the PUBLISH command. Redis server will use the given channel as the key, find the linked list of all clients subscribing to the channel in the channel dictionary maintained by it, traverse the linked list, and PUBLISH the message to all subscribers

  • Pub/Sub literally means Publish and Subscribe. In Redis, you can set a key value for message publishing and message subscription. When a key value is published, all clients subscribing to it will receive corresponding messages

Usage scenarios

  • Building real-time message systems with Pub/Sub
  • Building real-time chat systems with Pub/Sub

9, Redis master-slave, sentinel and cluster

The experiments here all run on a single machine, so only the ports differ; in production the nodes should be spread across different machines.

1. Master-slave replication

Master-slave replication copies data from one Redis server (the master/leader) to one or more others (the slaves/followers). Replication is unidirectional: data only flows from master to slave. The master mainly handles writes, the slaves mainly handle reads. By default every Redis server is a master node, and a single Redis instance's memory should not exceed 20G.

This suits read-heavy, write-light workloads such as e-commerce.

Master-slave replication mainly provides:

  • Data redundancy: master-slave replication realizes the hot backup of data, which is a data redundancy method other than persistence.
  • Fault recovery: when the master node has problems, the slave node can provide services to achieve rapid fault recovery; In fact, it is a kind of redundancy of services.
  • Load balancing: on the basis of master-slave replication, combined with read-write separation, the master node can provide write services, and the slave node can provide read services (that is, the application connects to the master node when writing Redis data, and the application connects to the slave node when reading Redis data), sharing the server load; Especially in the scenario of less writing and more reading, the concurrency of Redis server can be greatly improved by sharing the reading load among multiple slave nodes.
  • High-availability cornerstone: beyond the above, master-slave replication is the foundation on which sentinel and cluster modes are built, and therefore the basis of Redis high availability.

Environment configuration

#View replication info for the current node
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:c75ea02227de8882aa3c60c9b22559e3076270b0
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Configure master-slave replication, with at least one master and two slaves

#Generate three configuration files; myredis.conf already exists here
cp conf/myredis.conf conf/myredis01.conf 
cp conf/myredis.conf conf/myredis02.conf
#Then modify each configuration file. Here is one of my configurations
#Modify, in turn, the port number, daemonize (to yes), the pidfile, the logfile and the dbfilename
port 6370
daemonize yes
pidfile /var/run/redis_6370.pid
logfile "6370.log"
dbfilename "dump6370.rdb"
#Make sure the file names do not clash. Finally, start the three services from three terminals
redis-server conf/myredis.conf 
redis-server conf/myredis01.conf 
redis-server conf/myredis02.conf 
#Check whether it is opened successfully
ps -ef|grep redis

Command-line configuration (takes effect temporarily; normally the configuration file is used)

#This is configured on the slaves only; my two slave ports are 6370 and 6371
127.0.0.1:6370> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6371> SLAVEOF 127.0.0.1 6379
OK
#Now the master's replication info shows the two connected slaves
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6371,state=online,offset=280,lag=1
slave1:ip=127.0.0.1,port=6370,state=online,offset=280,lag=1
master_failover_state:no-failover
master_replid:d0f2fce55c4ee9f4403b7ff342ca7e43ef38d470
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:280
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:280
# A slave can be promoted back to a master with this command
127.0.0.1:6371> SLAVEOF no one 

Configuration file setup

# In the REPLICATION section of each slave's configuration file, set
replicaof <masterip> <masterport>

Test details

  • The master can read and write; slaves can only read, and they automatically replicate the master's content (see the sketch after this list)
  • When the master goes down, the slaves can still serve reads
  • A slave configured on the command line comes back as a master if it restarts after going down; once reconfigured as a slave, it fetches the master's latest data again
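
A quick sketch of the first two points on this setup (6379 is the master, 6370 a slave; the key name is made up):

#Writes succeed on the master and replicate to the slaves
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6370> get k1
"v1"
#Writes on a slave are rejected
127.0.0.1:6370> set k2 v2
(error) READONLY You can't write against a read only replica.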

Replication principle

After a slave starts and connects to the master successfully, it sends a sync command. On receiving it, the master starts a background save process while collecting every command that modifies the dataset; when the background save finishes, the master transfers the whole data file to the slave, completing one full synchronization. Note that whenever the slave reconnects to the master, a full synchronization (full replication) is performed automatically

  • Full replication: the slave saves the database file it receives and loads it into memory.
  • Incremental replication: the master then forwards each newly collected write command to the slave, in order, to keep it synchronized

2. Sentinel mode

Sentinel mode monitors in the background whether the master has failed; if it has, a slave is automatically promoted to master according to the votes cast. Sentinel is a special mode: Redis ships a redis-sentinel program, and each sentinel runs as an independent process. The principle is that a sentinel sends commands and waits for the Redis servers to respond, thereby monitoring multiple running Redis instances at once.

In the usual setup, sentinel mode needs six processes: three Redis servers and three sentinels. Suppose the master goes down and sentinel 1 detects it first. The system does not fail over immediately; sentinel 1 merely believes, subjectively, that the master is unavailable, a state called subjective offline. Only when the other sentinels also detect that the master is unavailable, and their number reaches the configured quorum, is the master considered objectively offline. The sentinels then hold a vote, one sentinel is elected to perform the failover, and after the switch succeeds each sentinel, via publish and subscribe, repoints the slaves it monitors at the new master.

Test configuration

# Keep the one-master-two-slaves configuration unchanged and add the sentinel processes
# Enter redis directory
cd /usr/local/bin/
# Make 3 copies of the sentinel configuration file sentinel.conf
cp /opt/redis-6.2.4/sentinel.conf conf/sentinel1.conf 
cp /opt/redis-6.2.4/sentinel.conf conf/sentinel2.conf 
cp /opt/redis-6.2.4/sentinel.conf conf/sentinel3.conf 

Modify the three sentinel configuration files in turn, ensuring the ports, pid files and log files do not share names; the log files go under /tmp

port 26381
daemonize yes
pidfile "/var/run/redis-sentinel26381.pid"
logfile "26381.log"
dir "/tmp"
#This line matters most. The arguments are the master's alias, the master's ip, its port, and the quorum: how many sentinels must agree before a slave can become the master, usually half the sentinels plus one
sentinel monitor mymaster 127.0.0.1 6379 2
#Start the three sentinels in turn from the current directory to complete the sentinel setup
redis-sentinel conf/sentinel1.conf
redis-sentinel conf/sentinel2.conf
redis-sentinel conf/sentinel3.conf
#Now if the 6379 master goes down, sentinel mode automatically elects a new master; when 6379 restarts it rejoins as a slave. The logs under /tmp show the process
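
A hedged way to watch the election from outside: ask any sentinel which address it currently considers the master (the port below is the first sentinel's):

redis-cli -p 26381 SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"
#After killing the 6379 master and waiting past down-after-milliseconds,
#the same query returns the address of the newly elected master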

Detailed explanation of the configuration file

# The port on which this sentinel instance runs, 26379 by default
port 26379
# Whether to start in the background
daemonize yes
# Runtime PID file
pidfile /var/run/redis-sentinel.pid
# Log file (absolute path)
logfile "/opt/app/redis6/sentinel.log"
# Data directory
dir "/tmp"
# The ip and port of the Redis master node this sentinel monitors
# The master name can be chosen freely, but may only contain letters A-z, digits 0-9 and the three characters ".-_"
# quorum: when this many sentinels consider the master lost, it is objectively considered down
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 2
# When requirepass is enabled on the Redis instances, every client connecting to them must supply the password
# Set the password sentinel uses to connect to master and slaves; note the master and all slaves must share the same authentication password
# sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
# How long (in milliseconds) the master may go without responding before the sentinel considers it down; default 30000 ms (30 seconds)
sentinel down-after-milliseconds mymaster 30000
# How many slaves may synchronize with the new master at once during a failover. The smaller the value, the longer the failover takes; the larger, the more slaves are temporarily unavailable because they are busy syncing
sentinel parallel-syncs mymaster 1
# Timeout for failover, 3 minutes by default
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000
# Forbid changing the notification and client-reconfig scripts at runtime via SENTINEL SET
sentinel deny-scripts-reconfig yes
# Configure the script to be executed when an event occurs. You can notify the administrator through the script. For example, send an email to notify relevant personnel when the system is not running normally.
# Notification script
# sentinel notification-script <master-name> <script-path>
sentinel notification-script mymaster /var/redis/notify.sh
# Client reconfiguration master node parameter script
# When a master changes due to failover, this script will be called to notify the relevant clients of the change of the master address.
# sentinel client-reconfig-script <master-name> <script-path>
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
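
A minimal sketch of what /var/redis/notify.sh might contain; the log path is made up, and sentinel passes the event type and event description as the two arguments:

#!/bin/bash
# Hypothetical notification script: append each sentinel event to a log file
EVENT_TYPE=$1
EVENT_DESC=$2
echo "$(date) [$EVENT_TYPE] $EVENT_DESC" >> /var/log/redis-sentinel-events.log
# a real deployment might mail or page an administrator here instead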

3. Redis cluster

A Redis cluster is composed of multiple nodes across which the data is distributed. Nodes are divided into masters and slaves: only masters handle read/write requests and maintain cluster information, while slaves only replicate their master's data and state. Redis cluster partitions data by hashing: the key's characteristic value is hashed, and the hash decides which node stores the data. The cluster is decentralized and every node is equal: you can get and set data through whichever node you connect to.

Redis cluster has the following functions:

  • Data partition: break through the storage limitation of single machine and distribute the data to multiple different nodes for storage;
  • Load balancing: each master node can handle read and write requests, which improves the concurrency ability;
  • High availability: the cluster has a failover capability similar to sentinel mode to improve the stability of the cluster;

Normal port: the client-facing port, e.g. the default 6379;

Cluster port: the normal port plus 10000, so for 6379 it is 16379; it is used for communication between cluster nodes

Configuration

Prepare 6 configuration files

| ID | IP | Port | Type | Slave node |
| --- | --- | --- | --- | --- |
| A | 127.0.0.1 | 6381 | master | AA |
| B | 127.0.0.1 | 6382 | master | BB |
| C | 127.0.0.1 | 6383 | master | CC |
| AA | 127.0.0.1 | 6391 | slave | / |
| BB | 127.0.0.1 | 6392 | slave | / |
| CC | 127.0.0.1 | 6393 | slave | / |
#Modify the redis.conf in each of the six directories; mainly enable cluster mode and change the port and file paths
#One example:
port 6381
daemonize yes
pidfile "/var/run/redis_6381.pid"
logfile "6381.log"
cluster-enabled yes                            # Enable cluster mode
cluster-node-timeout 15000                     # Sets the timeout milliseconds of the current node connection
#Path of this node's cluster configuration file, maintained automatically by the cluster: if it exists, the node starts from it; if not, the node initializes its configuration and saves it to this file.
cluster-config-file node_6381.conf             
#=========================================
#Create the cluster: the first three addresses become masters, the last three slaves
#--cluster-replicas sets how many replica nodes each master gets
redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6391 127.0.0.1:6392 127.0.0.1:6393 --cluster-replicas 1
# -c: connect in cluster mode (follows redirections); host and password are optional
redis-cli -c [-h 127.0.0.1] -p 6381 [-a 123456]
#Cluster status
CLUSTER INFO     
#List node information
CLUSTER NODES                  
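
A sketch of the decentralization in practice: with -c the client follows MOVED redirections, so a key can be written through any node (the key name is made up; the slot number is whatever CRC16 yields):

127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 127.0.0.1:6383
OK
127.0.0.1:6383> get k1
"v1"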

10, Redis cache

1. Cache penetration

Cache penetration means querying data that does not exist at all, so neither the cache layer nor the persistence layer gets a hit. In day-to-day work, for fault-tolerance reasons, data that cannot be found in the persistence layer is not written to the cache layer, so every request for the nonexistent key goes straight through to the persistence layer, and the cache loses its purpose of protecting the backend.
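
A common mitigation is to cache an empty value with a short TTL so repeated misses stop hitting the persistence layer; a sketch (the key name and TTL are illustrative):

#Cache a null marker for a key the persistence layer could not find;
#the short expiry lets real data appear later
127.0.0.1:6379> SET user:9999 "" EX 60
OK
#Subsequent requests hit the cached empty value instead of the database
127.0.0.1:6379> GET user:9999
""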

2. Cache breakdown

Cache breakdown requires two conditions to coincide: the key is a hot key (for example, one backing a flash sale) with very high concurrency, and rebuilding the cache entry cannot be done quickly because it involves a complex computation such as an expensive SQL query, multiple IO operations or multiple dependencies. The instant the hot key expires, a large number of threads try to rebuild the cache at once, driving up the backend load and possibly crashing the application.
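
One widespread remedy is a mutex key: only the client that acquires the lock rebuilds the cache while the others briefly wait and retry; a sketch using SET with the NX and EX options (the key names are illustrative):

#Only one client succeeds here and proceeds to rebuild the hot key;
#NX = set only if absent, EX = auto-expire so a crashed holder cannot block forever
127.0.0.1:6379> SET lock:hotkey 1 NX EX 10
OK
#A concurrent client fails to acquire the lock and retries shortly
127.0.0.1:6379> SET lock:hotkey 1 NX EX 10
(nil)
#After rebuilding the cache, the winner releases the lock
127.0.0.1:6379> DEL lock:hotkey
(integer) 1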

3. Cache avalanche

Because the cache layer carries a large share of requests, it effectively protects the storage layer. But if the cache layer becomes unavailable for some reason (e.g. it goes down), or a large batch of keys expires in the same time window because they share the same TTL (mass key failure / hot data expiry), requests reach the storage layer directly; the storage layer is overwhelmed and the system avalanches.
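
For the shared-TTL half of the problem, a frequently used trick is to add random jitter to each key's TTL so they do not all expire in the same instant; a shell sketch (base TTL, jitter and key names are illustrative):

#Spread expirations: base TTL of one hour plus up to 10 minutes of random jitter
for i in 1 2 3; do
  redis-cli -p 6379 SET "product:$i" "data" EX $((3600 + RANDOM % 600))
done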

Reference articles:
https://blog.csdn.net/wsdc0521/article/details/106316972
https://blog.csdn.net/weixin_43445935/article/details/115393205
https://www.bilibili.com/video/BV1S54y1R7SB?p=12&spm_id_from=pageDriver
