MySQL master-slave read-write separation practice

Posted by kenyabob on Sun, 01 Dec 2019 08:02:07 +0100

Environment

  • mysql-5.7.19
  • Centos7
  • master: 192.168.111.64
  • slave: 192.168.111.66

Master-slave configuration

master configuration

## GTID
server_id = 100    # Server ID; must be unique in the replication topology (by convention the slave uses a larger value than the master)
gtid_mode = on    # Turn on GTID mode
enforce_gtid_consistency = on    # Force GTID consistency; once enabled, statements such as CREATE TABLE ... SELECT are no longer allowed
log-bin = mysql-bin    # Enable the binary log
binlog_format = row    # The default is mixed; use row-based replication for data consistency
log-slave-updates = 1    # Have the slave write transactions replicated from the master into its own binlog
skip_slave_start = 1    # Do not start the replication threads automatically at server startup
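
These settings (and the slave settings below) belong in the [mysqld] section of the MySQL configuration file; on CentOS 7 with a package install that is typically /etc/my.cnf (adjust the path if your installation differs):

# /etc/my.cnf (path assumed; use your actual config file)
[mysqld]
# ... existing settings ...
# replication settings from this article go here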

slave configuration

## Set server_id; it must be unique (different from the master)
server_id=101
log-bin = mysql-bin
binlog_format = row
log-slave-updates = 1
gtid_mode = on
enforce_gtid_consistency = on
skip_slave_start=1
# Make the slave read-only (this does not restrict accounts with the SUPER privilege)
read_only=on
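
Because read_only does not apply to accounts holding the SUPER privilege, MySQL 5.7 also provides super_read_only; enabling it as well is optional but blocks those accounts from writing to the slave too:

# Optional: also block SUPER users from writing (available since MySQL 5.7.8)
super_read_only = on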

Restart database

service mysqld restart
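
After the restart, a quick sanity check on each server confirms the settings took effect (run in the mysql client; values should match the configuration above):

# Verify GTID and binlog settings
show variables like 'gtid_mode';
show variables like 'log_bin';
show variables like 'server_id';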

Create users

# Replication account used by the slave; connections come from the slave's address (192.168.111.66), and REPLICATION SLAVE is the privilege replication requires
GRANT SELECT,RELOAD,SHOW DATABASES,LOCK TABLES,EVENT,REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'repl_user'@'192.168.111.66' IDENTIFIED BY '123456';

# Create a read-only account for reading data from the slave
GRANT SELECT ON *.* TO 'reader'@'192.168.111.%' IDENTIFIED BY '123456';
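
Grants take effect immediately, but it is worth confirming the accounts before moving on (host values match the accounts created above):

# Verify the accounts
SHOW GRANTS FOR 'repl_user'@'192.168.111.66';
SHOW GRANTS FOR 'reader'@'192.168.111.%';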

Data export

# Run on 192.168.111.66 (the slave)
nohup mysqldump --single-transaction --master-data=2 --triggers --routines --all-databases --events -h192.168.111.64 -u'root' -p'password' > /home/backup.sql &

Because the data set is large (about 30 GB when exported), run the dump in the background with nohup [command] & so it is not interrupted if the session disconnects. Use jobs -l to check on it.

The dump can also be created directly on the master server and then copied to the slave with scp backup.sql 192.168.111.66:/home.
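
Since --master-data=2 writes the replication coordinates into the dump as a comment, and GTID mode makes mysqldump record the source's GTID set, you can check what the dump captured before importing it (paths as in the example above):

# Inspect the replication metadata recorded near the top of the dump
grep -m1 'GTID_PURGED' /home/backup.sql
grep -m1 'CHANGE MASTER TO' /home/backup.sql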

Import the data on the slave

Import from inside the mysql client

  • Log in to mysql
# Connect to the database
mysql -uroot -p'123456'
  • Run the import with the source command
source /home/backup.sql

Import directly from the command line

nohup mysql -uroot -p'123456' < /home/backup.sql &

The second method is recommended, especially when the data volume is large.
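
One GTID-specific caveat: the dump contains a SET @@GLOBAL.GTID_PURGED statement, which only succeeds when the slave's own GTID_EXECUTED set is empty. If the import fails with an error about GTID_PURGED, clear the slave's binlog and GTID state first (only do this on the slave, never on the master):

# Run on 192.168.111.66 before the import if GTID_PURGED cannot be set
reset master;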

Set up the master-slave connection on 192.168.111.66

  • Connect to mysql
mysql -uroot -p'123456'
  • Configure replication
# Point the slave at the master; MASTER_AUTO_POSITION=1 enables GTID-based auto-positioning
CHANGE MASTER TO MASTER_HOST='192.168.111.64', MASTER_PORT=3306,MASTER_USER='repl_user',MASTER_PASSWORD='123456',MASTER_AUTO_POSITION=1;

# Start replication
start slave;

# View master-slave status
show slave status \G
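
In the show slave status output, the fields below are the ones to check; a healthy replica looks roughly like this (your GTID sets and positions will differ):

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Retrieved_Gtid_Set: <GTIDs fetched from the master>
Executed_Gtid_Set: <GTIDs already applied on this slave>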

Common commands:

  • Stop replication: stop slave
  • Reset the slave's replication state: reset slave
  • Reset the master's binary logs and GTID state: reset master
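
With replication healthy, read-write separation itself just means sending writes to the master and reads to the slave through the read-only account created earlier. A minimal sketch from the shell (test.t1 is only a placeholder table; real applications usually route this through two connection pools or a proxy layer):

# Writes go to the master
mysql -h192.168.111.64 -uroot -p'123456' -e "INSERT INTO test.t1 VALUES (1);"

# Reads go to the slave via the read-only account
mysql -h192.168.111.66 -ureader -p'123456' -e "SELECT * FROM test.t1;"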

Topics: Linux MySQL Database SQL mysqldump