1, Background
Because a set of environments needs to be migrated to AWS Redis, the original plan was to upload the RDB files to S3 and use S3 seeding to load them into the new cluster at creation time. This is convenient and fast, but there are two problems: cluster creation fails when the RDB files are large, and the method cannot synchronize data in real time, so the desired smooth cutover cannot be achieved.
2, Tool used
Tool name: redis-shake (RedisShake)
1. Download
Download address: https://github.com/alibaba/RedisShake/tags
wget https://github.com/alibaba/RedisShake/releases/download/release-v2.0.3-20200724/redis-shake-v2.0.3.tar.gz
2. Decompression
tar xzf redis-shake-v2.0.3.tar.gz
3. Edit the configuration file
cd redis-shake-v2.0.3
vim redis-shake.conf
4. Detailed description of the configuration file
Parameter name | Description |
---|---|
source.type | Architecture of the source (self-built) Redis. Values: standalone: single-node or master-slave architecture; cluster: cluster architecture. |
source.address | Connection address and port of the source, separated by a colon (:). |
source.password_raw | Password of the source. Leave it empty if the source has no password. |
target.type | Same as source.type, for the target. |
target.address | Same as source.address, for the target. |
target.password_raw | Same as source.password_raw, for the target. |
target.db | Migrate the data of all source databases into the specified target database, value range 0-15. The default value -1 means this feature is disabled. |
key_exists | Write strategy when a key in the source already exists in the target. Values: rewrite: overwrite the key in the target. none (default): stop redis-shake and report the conflicting key. ignore: skip the current key, keep the target's data, and continue the migration. |
filter.db.whitelist | Databases to migrate; separate multiple names with semicolons (;). Empty by default, i.e. migrate all databases. Example: 0;1 |
filter.db.blacklist | Databases not to migrate (blacklist); separate multiple names with semicolons (;). Empty by default, i.e. no blacklist. Example: 0;1 |
parallel | Number of concurrent threads redis-shake uses for migration. Increasing it appropriately can improve synchronization performance. |
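A hedged sketch of the database-filtering parameters above (illustrative values only; these lines are not part of the example in section 5 below):

```
# merge all source databases into destination db 0 (-1, the default, leaves this disabled)
target.db = 0
# migrate only db 0 and db 1 from the source; leave empty to migrate every database
filter.db.whitelist = 0;1
# overwrite keys that already exist on the destination
key_exists = rewrite
```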
5. Configuration file example
# standalone source
source.type = standalone
source.address = redis_ip:redis_port
# password of db/proxy. even if type is sentinel.
source.password_raw = password
# cluster target
target.type = cluster
target.address = redis_ip:redis_port;redis_ip:redis_port;redis_ip:redis_port
# leave empty if the target has no password
target.password_raw = 
key_exists = rewrite
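Once the configuration is in place, the migration can be started. A hedged sketch, assuming the Linux binary shipped in the v2.0.3 tarball and the sync mode (full synchronization followed by incremental replication, which is what enables the smooth cutover described in the background):

```
# start full + incremental synchronization (assumed binary name and flag)
./redis-shake.linux -type=sync -conf=redis-shake.conf
```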
6. Errors and Solutions
6.1 **Q: An error is reported during synchronization**
2021/07/23 09:27:33 [PANIC] read error, please check source redis log or network
[error]: EOF
[stack]:
1 /Users/tongyao.lty/Work/RedisShake/src/redis-shake/common/utils.go:930 redis-shake/common.Iocopy
0 /Users/tongyao.lty/Work/RedisShake/src/redis-shake/dbSync/syncBegin.go:92 redis-shake/dbSync.(*DbSyncer).runIncrementalSync
A: Check the source Redis log; it shows "events=rw cmd=psync scheduled to be closed ASAP for overcoming of output buffer limits", i.e. the replication connection was closed for exceeding the output buffer limit.
Solution: config set client-output-buffer-limit 'normal 0 0 0 slave 0 0 0 pubsub 268435456 67108864 60'
See also the FAQ on the project wiki: https://github.com/alibaba/RedisShake/wiki
6.2 **Q: Is resuming an interrupted migration (breakpoint resume) supported?**
A: Currently the master-slave version and some cluster versions are supported; see the corresponding document on the wiki for details.
6.3 **Q: Some cloud Redis offerings do not grant sync/psync permission. How can they be migrated?**
A: Starting from v1.4, the rump scan-based migration mode handles sources where sync/psync is not available. This mode only supports full migration, not incremental migration. See the usage document on the wiki.
6.4 **Q: My source master-slave instance has 16 logical databases (db0-db15); why does the cluster-version destination only contain db0 after synchronization?**
A: The cluster version only supports db0, so the data in db1-db15 is not synchronized to the destination.
6.5 **Q: How do I filter Lua scripts?**
A: Starting from v1.6.9, users can set the filter.lua parameter; see the description in the configuration file for details. Note that in Redis 5.0 all Lua scripts are converted into transaction operations, so they cannot be filtered.
6.6 **Q: What does the key filtering feature do?**
A: Set filter.key.whitelist so that only keys with the specified prefixes pass, or set filter.key.blacklist so that keys with the specified prefixes are blocked and the rest pass. At most one of the two parameters may be set. For example, filter.key.whitelist = abc;xxx;efg lets abc, abc1 and xxxyyyy pass, while kkk and mmm do not pass (see the configuration sketch after item 6.9).
6.7 **Q: Does RedisShake support codis and twemproxy?**
A: Yes. However, set big_key_threshold = 1 and enable filter.lua = true.
6.8 **Q: How do I control the synchronization concurrency?**
A: Use the source.rdb.parallel parameter. For example, source.rdb.parallel = 4 means at most four full syncs run at a time, and a new full sync only starts after one has completed. If the source has 8 shards and the value is 4, only 4 shards are synchronized at the same time; once the first shard finishes its full stage (restore mode completes, or sync mode enters the incremental stage), the fifth full sync starts, and so on until every shard is finished or has entered incremental synchronization.
6.9 **Q: Is db mapping supported, for example syncing source db2 to destination db10?**
A: No. Currently all source dbs can only be synchronized into a single destination db: setting target.db = 10 sends every source logical db to destination db10.
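A hedged sketch of the key filtering described in 6.6, reusing the example prefixes from that answer (remember that at most one of the two filters may be set):

```
# keep only keys whose names start with one of these prefixes
filter.key.whitelist = abc;xxx;efg
# or, alternatively, block keys with these prefixes and let everything else through
# filter.key.blacklist = kkk;mmm
```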
6.10 **Q: decode Redis resp failed. [error]: EOF**
A: If redis-shake pulls from a slave on the source side and the connection between that slave and its master is broken, pulling from the slave will fail.
6.11 **Q: [PANIC] read sync response = ''. EOF**
A: Check the log of the source Redis node. Usually the source is saving an RDB and refuses new psync requests during that period, which can happen on some Redis versions. Wait a while and try again.
6.12 **Q: -ERR Can't SYNC while not connected with my master**
A: See the EOF problem above (6.10).
6.13 **Q: target key name is busy**
A: The corresponding key already exists on the destination. There are three solutions; choose one: delete the conflicting key on the destination; enable key_exists = rewrite so the source key overwrites the destination (supported since v1.6.27; before v1.6.27 use rewrite = true); enable key_exists = ignore to skip the conflicting key (supported since v1.6.27).
6.14 **Q: -ERR Unable to perform background save**
A: Check the source Redis log. bgsave failed on the source, usually because of insufficient memory or a disk write failure.
6.15 **Q: OOM command not allowed when used memory > 'maxmemory'**
A: While writing to the destination, memory usage exceeded the destination's specification.
6.16 **Q: [PANIC] parse rdb entry error, if the err is :EOF, please check that if the src db log has client outout buffer oom, if so set output buffer larger**
A: Check the source Redis log. This is usually caused by the full sync taking too long or the increment being too large, so the output buffer fills up. Common solutions: increase redis-shake's full-sync concurrency (parallel); increase the source's output buffer by raising the client-output-buffer-limit parameter (recommended; see the redis-cli sketch after item 6.22); resynchronize during off-peak hours.
6.17 **Q: restore command error key:xxx err:-ERR server closed connection**
A: The destination closed the connection. If the key being written is too large, reduce big_key_threshold so that the key is split.
6.18 **Q: [PANIC] auth failed[-ERR unknown command '']**
A: Set source.auth_type = auth and target.auth_type = auth. This problem was fixed in the v2.6.26 version. See: https://github.com/alibaba/RedisShake/issues/237
6.19 **Q: ERR redis tempory failure**
A: On some cluster versions, such as Alibaba Cloud, this error is reported if a backend db node has just switched between master and slave.
6.20 **Q: Error: NOSCRIPT No matching script. Please use EVAL**
A: This usually occurs at the destination and means the corresponding Lua script has been lost. The problem no longer appears in Redis v4.0.4 and later; before 4.0.4 it can occur when the source connection is to a slave node. Solutions: upgrade the source to 4.0.4 or later; connect the source to the master; manually add the missing Lua script on the destination; enable redis.replicate_commands() on the source.
6.21 **Q: checksum validation failed**
A: The checksum option is disabled on the source Redis; it can be enabled with config set rdbchecksum yes.
6.22 **Q: ERR 'EVAL' command keys must in the same slot**
A: The keys operated on by the Lua script are not in the same slot; this usually occurs when the destination is a cluster version. Either modify the Lua script yourself, or use hash tags to place the keys involved in the same slot (a sketch follows item 6.32).
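A hedged redis-cli sketch of the output-buffer fix mentioned in 6.1 and 6.16; the host and port are placeholders, and the limits are the values suggested in 6.1:

```
# inspect the current limits on the source (placeholder host and port)
redis-cli -h <source_host> -p 6379 config get client-output-buffer-limit
# lift the slave limit and raise the pubsub limit so the full sync is not cut off
redis-cli -h <source_host> -p 6379 config set client-output-buffer-limit "normal 0 0 0 slave 0 0 0 pubsub 268435456 67108864 60"
```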
6.23 **Q: Conf.Options check failed: get target redis version failed[EOF]**
A: Some destinations, such as twemproxy, do not support querying the destination Redis version. Set target.version to force the destination version.
6.24 **Q: ERR syntax error**
A: This usually appears when the destination Redis version is lower than the source version, so some data formats are incompatible.
6.25 **Q: [xxx] redis address should be all masters or all slaves, master:[xxx], slave[xxx]**
A: When entering cluster-version addresses, they must be either all masters or all slaves; the specified addresses cannot mix master and slave roles. See issue#149.
6.26 **Q: do dump with failed[EOF]**
A: The source Redis connection was disconnected; check the status in the source Redis log. Usually some key is too large, for example over 512 MB (no solution for this), or the output buffer is full.
6.27 **Q: Error: CROSSSLOT Keys in request don't hash to the same slot**
A: The keys in a single request do not hash to the same slot, which is a hard constraint of the Redis cluster itself. Starting from redis-shake v1.6.27, for some commands this constraint was relaxed from one slot to one shard.
6.28 **Q: ERR DUMP payload version or checksum are wrong**
A: This usually appears in rump mode when the source version is higher than the destination version, for example source 4.0 and destination 2.8: some data structure formats have changed, which prevents synchronization. Setting big_key_threshold = 1 bypasses this restriction.
6.29 **Q: [PANIC] restore command response = 'ECONNTIMEOUT: dial tcp xxx:1000: connect: cannot assign requested address', should be 'OK'**
A: The local ports are exhausted; the cause needs to be investigated.
6.30 **Q: ERR syntax error**
A: This also occurs when the source version is higher than the destination version, for example source 4.0 and destination 2.8.
6.31 **Q: run ChooseNodeWithCmd failed[transaction command[xxxx] key[yyyyy] not hashed in the same slot]**
A: The keys in a transaction must hash to the same slot. This usually appears when the source and destination architectures differ (for example a master-slave source and a cluster destination), or when the source is a cluster version whose slot distribution differs from the destination's.
6.32 **Q: return error[ERR Bad data format], ignore it and try to split the value**
A: This also occurs when the source version is higher than the destination version, for example source 4.0 and destination 2.8. Setting big_key_threshold = 1 bypasses this restriction.
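A hedged redis-cli sketch of the hash-tag workaround mentioned in 6.22, 6.27 and 6.31 (the key names and host are placeholders): keys that share the same {tag} are hashed on the tag alone, so they land in the same cluster slot and can be used together in one script, transaction, or multi-key command.

```
# both keys hash on "user:1", so they map to the same slot and the script is allowed
redis-cli -c -h <target_host> EVAL "return redis.call('MGET', KEYS[1], KEYS[2])" 2 "{user:1}:name" "{user:1}:email"
```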
6.33 **Q: Lua synchronization errors?**
A: Common errors include:
ERR bad lua script for redis cluster, first parameter of redis.call/redis.pcall must be a single literal string
-ERR bad lua script for redis cluster, all the keys that the script uses should be passed using the KEYS array
-ERR for redis cluster, eval/evalsha number of keys can't be negative or zero
-ERR eval/evalsha command keys must in same slot
These occur when the destination is the Alibaba Cloud cluster version. Basically, the Lua scripts were not written according to the rules of the cluster version; this usually happens when moving from a master-slave setup to a cluster. Current redis-shake versions can filter out the Lua scripts that cause import errors. For the cloud version of redis-shake, set "filter.lua = 2" in the configuration file and restart redis-shake; usually "rewrite = true" also needs to be added. For the open-source redis-shake, set filter.lua = true; usually also add rewrite = true (versions before 1.6.27) or key_exists = rewrite (1.6.27 and later).
**Q: ERR unknown command 'ISCAN'**
A: The configuration item scan.special_cloud = aliyun_cloud was added; this option is only for Alibaba Cloud's cluster version. If the source is the master-slave version, remove this option.
6.34 **Q: ERR command replconf not support for your account**
A: This error is reported when the source is an Alibaba Cloud instance. For the master-slave version, apply for replication permission in the console and put the granted password into source.password_raw. For the cluster version, replication permission is not yet supported, so only the rump mode (full scan migration) can be used; incremental migration is not supported.
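A hedged sketch of running the rump (scan) mode mentioned in 6.3 and 6.34; the binary name is assumed to be the one shipped in the v2.0.3 tarball from section 2, and the -type flag is assumed to select the mode:

```
# full-only scan migration for sources without SYNC/PSYNC permission (assumed binary name and flag)
./redis-shake.linux -type=rump -conf=redis-shake.conf
```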