1, Overview of MySQL MMM cluster technology
summary:
MMM (Multi-Master Replication Manager for MySQL) is a set of Perl scripts that provides failover between two masters and day-to-day management of a master-master setup; in short, it is a replication manager for MySQL master-master clusters.
Although the replication is called master-master, the business only allows one master to accept writes at any given time; the other, standby master serves part of the read traffic, which keeps it warmed up and speeds up the switchover when the active master fails. In other words, the MMM scripts implement failover on the one hand, and the bundled tool scripts can also load-balance reads across multiple slaves on the other.
MMM provides both automatic and manual ways to remove the virtual IP from a server whose replication lag is too high within a group of servers. It can also back up data and keep the two master nodes synchronized. Because MMM cannot fully guarantee data consistency, it suits scenarios that do not demand strict consistency but want to maximize business availability.
2, Advantages and disadvantages of MySQL MMM
(Figure: advantages and disadvantages of MySQL MMM; the original image is no longer available.)
3, MySQL MMM internal working architecture
Process type:
mmm_mond: the monitoring daemon; it performs all monitoring work and decides and handles all node role changes. This script runs on the monitor host
mmm_agentd: the agent daemon that runs on each MySQL server (master and slave); it answers the monitor's probes and carries out simple remote changes on its node. This script runs on the supervised machines
mmm_control: a simple script that provides the commands for managing the mmm_mond process, for example:
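Both subcommands below are standard mmm_control usage (a quick sketch; they are exercised in the case later in this article):

mmm_control show                   ##display each node's state and the roles (VIPs) it currently holds
mmm_control move_role writer db2   ##manually move the writer role to the node named db2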
Work structure:
(Figure: MySQL MMM working architecture; the original image is no longer available.)
Working principle:
The monitor host of MySQL MMM manages multiple virtual IPs (VIPs): one writable VIP and several readable VIPs
Under the monitor's control, these VIPs are bound to the currently available MySQL servers (see the check below)
When a MySQL server goes down, the monitor migrates its VIP(s) to another available MySQL server
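Since the agent binds each VIP as a secondary address on the hosted network interface, you can check on any MySQL node which VIPs it currently holds with a standard ip command (a quick check; ens33 is the interface name used later in this case):

##on the node currently holding the writer role, the writer VIP appears as an extra inet address
ip addr show dev ens33 | grep "inet "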
4, Case: build a MySQL MMM + MySQL 5.7 dual-master high availability cluster
Case environment:
Prepare six hosts: four MySQL servers (two masters, two slaves), one monitor, and one client, namely:

| IP | Role |
|---|---|
| 192.168.1.1 | master1 |
| 192.168.1.2 | master2 |
| 192.168.1.3 | slave1 |
| 192.168.1.4 | slave2 |
| 192.168.1.5 | monitor |
| 192.168.1.6 | client |
Case steps:
Deploy the NTP service and host name resolution on the master1 node:
hostnamectl set-hostname master1
cat <<END >>/etc/hosts
192.168.1.1 master1
192.168.1.2 master2
192.168.1.3 slave1
192.168.1.4 slave2
192.168.1.5 monitor
END
Configure the master2, slave1, slave2 and monitor nodes for NTP time synchronization and host name resolution (only master2's configuration is listed here):
yum -y install ntp
sed -i '/^server/s/^/#/g' /etc/ntp.conf
grep "^#server" /etc/ntp.conf
cat <<END >>/etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 8
END
systemctl start ntpd
systemctl enable ntpd
netstat -anptu | grep ntpd
yum -y install ntpdate
ntpdate 192.168.1.1
The above operations are required for each server
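As a quick sanity check, the loop below confirms host name resolution and time synchronization on every node. It is only a sketch: it assumes passwordless root SSH from master1 to the other hosts, which this case does not otherwise set up.

##hypothetical verification loop, run from master1
for h in master2 slave1 slave2 monitor; do
    ssh root@$h "hostname; ntpdate -q 192.168.1.1 | tail -1"
done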
Configure master1 for dual-master replication
master1:
vi /etc/my.cnf
server-id=1
log-bin=mysql-bin
log-slave-updates
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=1
relay-log=relay1-log-bin
relay-log-index=slave-relay1-bin.index
Notes:
sync_binlog=1 ##flush the binary log to disk every time a transaction is committed, so that committed events are not lost from the log even if the server crashes;
auto_increment_increment=2 / auto_increment_offset ##these two parameters are used in master-master replication to stagger the auto-increment values generated on the two masters and prevent key conflicts
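To see the staggering in action, insert a few rows on each master once replication is running. The sketch below uses a hypothetical demo.t table: with auto_increment_offset=1, master1 generates only odd ids, while master2 (offset=2, as configured below) generates only even ids, so the two masters never collide.

##run on master1 after both replication directions are up, then repeat the insert on master2
mysql -uroot -p123456 -e "
create database if not exists demo;
create table if not exists demo.t (id int auto_increment primary key, note varchar(20));
insert into demo.t (note) values ('a'),('b'),('c');
select id, note from demo.t;"
##expected ids on master1: 1, 3, 5; master2 then continues with even ids (6, 8, 10)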
systemctl restart mysqld
mysql -uroot -p123456
master1:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      154 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> grant replication slave on *.* to 'master'@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
Configure master2 for dual-master replication
master2:
cat <<END >>/etc/my.cnf
server-id=2
log-bin=mysql-bin
log-slave-updates
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=2
relay-log=relay2-log-bin
relay-log-index=slave-relay2-bin.index
END
systemctl restart mysqld
mysql -uroot -p123456
mysql> change master to master_host='192.168.1.1',master_user='master',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=154;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
mysql> exit
Bye
master2:
mysql> grant replication slave on *.* to 'master'@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      611 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> exit
Bye
Operation on master1:
mysql -uroot -p123456
mysql> change master to master_host='192.168.1.2',master_user='master',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=611;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
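Before creating test data, confirm that both replication threads are running on each master; a compact way to check is to filter the show slave status output (just a convenient grep):

##run on master1 and master2; both threads must report Yes and the lag should be 0
mysql -uroot -p123456 -e "show slave status\G" | grep -E "Slave_(IO|SQL)_Running:|Seconds_Behind_Master"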
mysql> create database aaa;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| bb                 |
| mysql              |
| performance_schema |
| sys                |
| zps                |
+--------------------+
7 rows in set (0.00 sec)
Operation on master2:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
Configure slave1 and slave2 for master-slave replication from master1 (only slave1's configuration is listed here)
[root@slave1 ~]# cat <<END >>/etc/my.cnf
server-id=4
relay-log=relay4-log-bin
relay-log-index=slave-relay4-bin.index
END
[root@slave1 ~]# systemctl restart mysqld
[root@slave1 ~]# mysql -uroot -p123456
mysql> change master to master_host='192.168.1.1',master_user='master',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=3687;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
[root@slave2 ~]# cat <<END >>/etc/my.cnf
server-id=5
relay-log=relay5-log-bin
relay-log-index=slave-relay5-bin.index
END
[root@slave2 ~]# systemctl restart mysqld
[root@slave2 ~]# mysql -uroot -p123456
mysql> change master to master_host='192.168.1.1',master_user='master',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=3687;
Query OK, 0 rows affected, 2 warnings (0.02 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
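The master_log_file / master_log_pos pair used above (mysql-bin.000001, position 3687) is specific to this walkthrough. On your own systems, read master1's current coordinates first and plug those values into the change master to statement:

##run on master1; copy the File and Position columns into the slaves' change master to
mysql -uroot -p123456 -e "show master status;"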
Install the MySQL MMM software on the five nodes (master1, master2, slave1, slave2 and monitor) and, on the database nodes, configure the agent with the node name of the local machine (only master1's configuration is listed here)
Configure a local yum repository and upload the Mysql_mmm_rpm folder to /root on master1
master1:
[root@master1 ~]# ls
anaconda-ks.cfg  boost_1_59_0.tar.bz2  mysql-5.7.12.tar.gz  Mysql_mmm_rpm
[root@master1 ~]# vi /etc/yum.repos.d/centos.repo
[123]
name=123
baseurl=file:///mnt
enabled=1
gpgcheck=0
[Mysql_mmm_rpm]
name=Mysql_mmm_rpm
baseurl=file:///root/Mysql_mmm_rpm
enabled=1
gpgcheck=0
scp -r Mysql_mmm_rpm root@192.168.1.2:/root/
scp -r Mysql_mmm_rpm root@192.168.1.3:/root/
scp -r Mysql_mmm_rpm root@192.168.1.4:/root/
scp -r Mysql_mmm_rpm root@192.168.1.5:/root/
cd /etc/yum.repos.d/
scp -r centos.repo root@192.168.1.2:/etc/yum.repos.d/
scp -r centos.repo root@192.168.1.3:/etc/yum.repos.d/
scp -r centos.repo root@192.168.1.4:/etc/yum.repos.d/
scp -r centos.repo root@192.168.1.5:/etc/yum.repos.d/
yum -y install mysql-mmm mysql-mmm-agent mysql-mmm-tools
On master1, master2, slave1 and slave2 respectively:
vi /etc/mysql-mmm/mmm_agent.conf
# The 'this' variable refers to this server. Proper operation requires
# that 'this' server (db1 by default), as well as all other servers, have the
# proper IP addresses set in mmm_common.conf.
this db1
:wq
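Each node must declare its own name here, matching the <host> names defined in mmm_common.conf below: db1 on master1, db2 on master2, db3 on slave1 and db4 on slave2. A one-liner per node saves editing the file by hand (a sketch; adjust the name on each host):

##on master2, for example, point 'this' at db2 instead of db1
sed -i 's/^this .*/this db2/' /etc/mysql-mmm/mmm_agent.conf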
On the master1 node, create the accounts that the monitor and the agents use to connect to the database cluster (the grants replicate to the other nodes):
mysql -uroot -p123456
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by 'monitor';
Query OK, 0 rows affected (0.00 sec)

mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.%' identified by 'agent';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Permission notes:
replication client: allows commands such as show master status and show slave status, which are used to inspect the replication state;
replication slave: allows a replica to connect to the master and read its binary log in order to replicate;
super: allows killing connections inside mysql, setting global variables, and changing the master/slave configuration;
process: allows viewing the currently running SQL statements, for example to explain an execution plan;
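Because the grants are ordinary writes, issuing them once on master1 replicates them to all other nodes; you can confirm that both accounts exist from any node:

##quick check of the two MMM accounts, run on any node of the cluster
mysql -uroot -p123456 -e "show grants for 'mmm_monitor'@'192.168.1.%'; show grants for 'mmm_agent'@'192.168.1.%';"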
Install the MySQL MMM software program on the monitor node
[root@monitor ~]# yum -y install mysql-mmm mysql-mmm-tools mysql-mmm-monitor
Configure the MySQL MMM configuration files on the monitor node and copy the shared mmm_common.conf to each mysql node:
[root@monitor ~]# vi /etc/mysql-mmm/mmm_common.conf
active_master_role      writer              ##specify the active role as the write role
<host default>
    cluster_interface       ens33           ##network card that carries the VIPs
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/   ##path of the executables
    replication_user        master          ##master-slave replication account of the mysql cluster
    replication_password    123456
    agent_user              mmm_agent       ##account used to drive the agents on the cluster nodes
    agent_password          agent
</host>
<host db1>
    ip      192.168.1.1
    mode    master
    peer    db2                             ##name of the opposite master node
</host>
<host db2>
    ip      192.168.1.2
    mode    master
    peer    db1
</host>
<host db3>
    ip      192.168.1.3
    mode    slave
</host>
<host db4>
    ip      192.168.1.4
    mode    slave
</host>
<role writer>
    hosts   db1, db2
    ips     192.168.1.100
    mode    exclusive                       ##only a single writer exists at any time
</role>
<role reader>
    hosts   db3, db4
    ips     192.168.1.101, 192.168.1.102
    mode    balanced                        ##round-robin across the readers
</role>
[root@monitor ~]# vi /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf
<monitor>
    ip          127.0.0.1
    pid_path    /var/run/mysql-mmm/mmm_mond.pid
    bin_path    /usr/libexec/mysql-mmm
    status_path /var/lib/mysql-mmm/mmm_mond.status
    ping_ips    192.168.1.1, 192.168.1.2, 192.168.1.3, 192.168.1.4   ##ip addresses of all monitored nodes
    auto_set_online 60          ##bring a recovered node back ONLINE automatically after it has been reachable for 60s
</monitor>
<host default>
    monitor_user        mmm_monitor         ##account the monitor uses to check the nodes
    monitor_password    monitor
</host>
debug 0
for i in 1 2 3 4;do scp /etc/mysql-mmm/mmm_common.conf root@192.168.1.$i:/etc/mysql-mmm/; done
Start the MySQL MMM agent service on the four nodes master1, master2, slave1 and slave2 (only master1 is shown here; the commands are identical on each node)
systemctl daemon-reload
systemctl start mysql-mmm-agent
netstat -utpln | grep mmm
Start the MySQL MMM service on the monitor node and view the cluster status
[root@monitor ~]# systemctl daemon-reload
[root@monitor ~]# systemctl start mysql-mmm-monitor
[root@monitor ~]# netstat -utpln | grep mmm
tcp        0      0 127.0.0.1:9988          0.0.0.0:*               LISTEN      15266/mmm_mond
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/ONLINE. Roles: writer(192.168.1.100)
  db2(192.168.1.2) master/ONLINE. Roles:
  db3(192.168.1.3) slave/ONLINE. Roles: reader(192.168.1.101)
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.102)
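Besides show, mmm_control offers a few other standard subcommands that are useful for day-to-day operation (shown here as typical usage):

mmm_control checks all             ##run every health check (ping, mysql, replication threads) on every host
mmm_control set_offline db2        ##take a node offline by hand (its state becomes ADMIN_OFFLINE)
mmm_control set_online db2         ##bring it back online
mmm_control move_role writer db2   ##move the writer VIP to the other master manually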
Verify from the client that data can be read and written through the VIPs
master1:
[root@master1 ~]# mysql -uroot -p123456
mysql> grant all on *.* to 'client'@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Client:
[root@client ~]# yum -y install mysql
[root@client ~]# mysql -uclient -p123456 -h192.168.1.100
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| bb                 |
| mysql              |
| performance_schema |
| sys                |
| zps                |
+--------------------+
7 rows in set (0.03 sec)

MySQL [(none)]> exit
[root@client ~]# mysql -uclient -p123456 -h192.168.1.101
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.02 sec)

MySQL [(none)]> exit
Stop the mysql service on the master1 node and test the cluster status
[root@master1 ~]# systemctl stop mysqld
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/HARD_OFFLINE. Roles:
  db2(192.168.1.2) master/ONLINE. Roles: writer(192.168.1.100)
  db3(192.168.1.3) slave/ONLINE. Roles: reader(192.168.1.101)
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.102)
Notes:
ONLINE: the node is online and can carry roles;
HARD_OFFLINE: the ping check failed and/or the mysql connection is broken, which puts the node into the hard_offline state;
ADMIN_OFFLINE: the node was taken offline manually;
[root@monitor ~]# tail /var/log/mysql-mmm/mmm_mond.log
2021/06/18 02:57:41 FATAL Couldn't open status file '/var/lib/mysql-mmm/mmm_mond.status': Starting up without status information.
2021/06/18 02:58:42 FATAL State of host 'db4' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(60 seconds). It was in state AWAITING_RECOVERY for 60 seconds
2021/06/18 02:58:42 FATAL State of host 'db2' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(60 seconds). It was in state AWAITING_RECOVERY for 60 seconds
2021/06/18 02:58:42 FATAL State of host 'db3' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(60 seconds). It was in state AWAITING_RECOVERY for 60 seconds
2021/06/18 02:58:42 FATAL State of host 'db1' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(60 seconds). It was in state AWAITING_RECOVERY for 60 seconds
2021/06/18 03:19:04 FATAL State of host 'db1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
[root@localhost ~]# mysql -uclient -p123456 -h192.168.1.100
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.03 sec)

MySQL [(none)]> exit
Bye
The writer VIP 192.168.1.100 still answers because it has been migrated to master2.
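When master1's mysqld comes back, the monitor first places db1 in AWAITING_RECOVERY and, because of auto_set_online 60, promotes it to ONLINE automatically after 60 seconds. If you do not want to wait, force it by hand:

##manual recovery with the standard mmm_control subcommand
mmm_control set_online db1
mmm_control show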
Turn off the write function of slave1 and slave2 nodes to make them read-only
[root@slave1 ~]# cat <<END >>/etc/my.cnf
read_only=1
END
[root@slave2 ~]# cat <<END >>/etc/my.cnf
read_only=1
END
Notes:
read_only=1 ##make the database service read-only; note that only ordinary users are restricted, accounts holding the super privilege can still write;
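Appending read_only=1 to /etc/my.cnf only takes effect after mysqld is restarted; to apply it immediately as well, set the global variable at runtime (standard MySQL behavior):

##run on slave1 and slave2; takes effect at once, while the my.cnf entry keeps the setting across restarts
mysql -uroot -p123456 -e "set global read_only=1; select @@read_only;"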
From the client, test whether the slave nodes are writable
[root@localhost ~]# mysql -uclient -p123456 -h192.168.1.100
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

MySQL [(none)]> create database sss;
Query OK, 1 row affected (0.01 sec)

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sss                |
| sys                |
+--------------------+
6 rows in set (0.00 sec)

MySQL [(none)]> exit
Bye
[root@localhost ~]# mysql -uclient -p123456 -h192.168.1.102
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| hhh                |
| mysql              |
| performance_schema |
| sss                |
| sys                |
+--------------------+
7 rows in set (0.02 sec)

MySQL [(none)]> create database nnnn;
ERROR 1290 (HY000): The MySQL server is running with the --read-only option so it cannot execute this statement
MySQL [(none)]> exit
Further failover drills: stop and start the nodes one by one and watch the roles migrate:
[root@slave1 ~]# systemctl stop mysqld
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/HARD_OFFLINE. Roles:
  db2(192.168.1.2) master/ONLINE. Roles: writer(192.168.1.100)
  db3(192.168.1.3) slave/HARD_OFFLINE. Roles:
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.101), reader(192.168.1.102)

[root@master1 ~]# systemctl start mysqld
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/ONLINE. Roles:
  db2(192.168.1.2) master/ONLINE. Roles: writer(192.168.1.100)
  db3(192.168.1.3) slave/HARD_OFFLINE. Roles:
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.101), reader(192.168.1.102)

[root@master2 ~]# systemctl stop mysqld
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/ONLINE. Roles: writer(192.168.1.100)
  db2(192.168.1.2) master/HARD_OFFLINE. Roles:
  db3(192.168.1.3) slave/HARD_OFFLINE. Roles:
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.101), reader(192.168.1.102)

[root@master2 ~]# systemctl start mysqld
[root@slave1 ~]# systemctl start mysqld
[root@monitor ~]# mmm_control show
  db1(192.168.1.1) master/ONLINE. Roles: writer(192.168.1.100)
  db2(192.168.1.2) master/ONLINE. Roles:
  db3(192.168.1.3) slave/ONLINE. Roles: reader(192.168.1.101)
  db4(192.168.1.4) slave/ONLINE. Roles: reader(192.168.1.102)
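Note that after db2 recovers, the writer role stays on db1: MMM does not fail back automatically. If you prefer the writer VIP back on db2, move it explicitly:

##standard mmm_control usage; migrates the writer VIP (192.168.1.100) to db2
mmm_control move_role writer db2
mmm_control show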