MMM Cluster Architecture Overview
MMM (Master-Master replication Manager for MySQL) is a flexible set of Perl scripts used to monitor MySQL replication, fail over between nodes, and manage the configuration of MySQL master-master replication (only one node is writable at any given time).
MMM high availability architecture description
mmm_mond: the monitoring daemon. It performs all monitoring work and decides and handles all node role changes. This script runs on the monitoring host.
mmm_agentd: the agent daemon that runs on each MySQL server. It answers the monitor's probes and performs simple remote service changes on its node. This script runs on the managed machines.
mmm_control: a simple script that provides commands to manage the mmm_mond process.
The monitoring side of MySQL-MMM provides multiple virtual IPs (VIPs): one writable VIP and multiple readable VIPs. The monitor binds these VIPs to the available MySQL servers; when one MySQL server goes down, the monitor migrates its VIPs to another working server.
For this to work, the relevant users must be granted in MySQL so that the monitor can maintain the servers. The authorized users include an mmm_monitor user and an mmm_agent user.
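A role change ultimately comes down to moving a VIP from one network interface to another. As a hedged illustration only (the agent handles this internally; the VIP and interface name below are the ones used later in this article), the equivalent manual commands are echoed here as a dry run:

```shell
# Illustration only: what binding/releasing a VIP amounts to at the OS level.
# mmm_agentd performs the equivalent of this when the monitor reassigns roles.
VIP=192.168.142.250   # writer VIP used later in this article
IFACE=ens33           # network card used later in this article
cmd_add="ip addr add ${VIP}/32 dev ${IFACE}"   # bind the VIP on promotion
cmd_del="ip addr del ${VIP}/32 dev ${IFACE}"   # release it on failover
echo "$cmd_add"
echo "$cmd_del"
```

Running the real `ip addr` commands requires root; clients only ever connect to the VIPs, so they never need to know which physical server currently holds a role.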
Advantages and disadvantages of MMM
Advantages: high availability, good scalability, and automatic failover. With master-master synchronization, only one node accepts writes at any time, which guarantees data consistency.
Disadvantages: the monitor node is a single point of failure (it can itself be made highly available with Keepalived); MMM has requirements on the number of hosts; and read-write splitting must be implemented, which is a challenge for the application.
Deployment of experimental environment
Server role                  IP address       Agent name  Virtual IP
Master server 1 (master1)    192.168.142.131  db1         writer VIP: 192.168.142.250
Master server 2 (master2)    192.168.142.134  db2
Slave server 1 (slave1)      192.168.142.130  db3         reader VIPs: 192.168.142.251, 192.168.142.252
Slave server 2 (slave2)      192.168.142.135  db4
Monitoring server (monitor)  192.168.142.136
Step 1: install MySQL database on all four servers
1. Configure the Aliyun yum repo, then install the epel-release repo
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum -y install epel-release
[root@localhost ~]# yum clean all && yum makecache
2. Install the MariaDB database
#Install the database
[root@localhost ~]# yum -y install mariadb-server mariadb
#Turn off the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
#Start the database
[root@localhost ~]# systemctl start mariadb.service
3. Modify the m1 main configuration file
[root@localhost ~]# vim /etc/my.cnf
#Delete the first 9 lines and add the following
[mysqld]
log_error=/var/lib/mysql/mysql.err                    #error log location
log=/var/lib/mysql/mysql_log.log                      #general (access) log location
log_slow_queries=/var/lib/mysql/mysql_slow_query.log  #slow query log location
binlog-ignore-db=mysql,information_schema             #do not write binary logs for mysql or information_schema
character_set_server=utf8                             #character set
log_bin=mysql_bin                                     #enable binary logging
server_id=1                                           #must be different on each host
log_slave_updates=true                                #log updates received from the master to this server's binary log
sync_binlog=1                                         #flush the binary log to disk on every commit
auto_increment_increment=2                            #auto-increment step
auto_increment_offset=1                               #auto-increment starting value
[root@localhost ~]# systemctl restart mariadb.service
[root@localhost ~]# netstat -natp | grep 3306
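The two auto_increment settings are what keep the master-master pair from ever generating conflicting primary keys: with a step of 2, m1 (offset 1) produces odd IDs and m2 (offset 2) produces even IDs. A quick sketch of the resulting sequences:

```shell
# Sketch: ID sequences produced by auto_increment_increment=2 with
# auto_increment_offset=1 (m1) and =2 (m2) -- the two masters never collide.
m1_ids=""; m2_ids=""
for n in 0 1 2 3; do
  m1_ids="$m1_ids $((1 + n * 2))"   # m1: offset 1, step 2
  m2_ids="$m2_ids $((2 + n * 2))"   # m2: offset 2, step 2
done
echo "m1:$m1_ids"   # odd IDs
echo "m2:$m2_ids"   # even IDs
```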
4. Copy the configuration file to the other three database servers, taking care to change the server_id on each
[root@localhost ~]# scp /etc/my.cnf root@192.168.142.134:/etc/
5. Enter the database and view the log file information
[root@localhost ~]# mysql
#View the log file name and position values
MariaDB [(none)]> show master status;
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql_bin.000001 |      245 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)
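The File and Position values shown here are exactly what the CHANGE MASTER TO statements in the next step need. As a hedged sketch, they can be pulled out of the header-less output of `mysql -N -e 'show master status'` with awk (sample output is embedded below for illustration):

```shell
# Hypothetical parsing helper; in practice feed it the real output:
#   out=$(mysql -N -e 'show master status')
out='mysql_bin.000001 245'
file=$(echo "$out" | awk '{print $1}')   # binary log file name
pos=$(echo "$out" | awk '{print $2}')    # current position
echo "master_log_file='$file', master_log_pos=$pos"
```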
6. On m1 and m2, grant each other replication access and point each at the other's log
#Grant replication access to each other on m1 and m2
MariaDB [(none)]> grant replication slave on *.* to 'replication'@'192.168.142.%' identified by '123456';
#On m1, specify m2's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.134',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;
#On m2, specify m1's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;
7. Start the replication threads on m1 and m2
MariaDB [(none)]> start slave;
8. Check the synchronization status; both threads must show Yes on both master servers
MariaDB [(none)]> show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
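Rather than reading the \G output by eye on four servers, the Yes/Yes check can be scripted. A hedged sketch (the sample output is embedded below; in practice pipe it in from `mysql -e 'show slave status\G'`):

```shell
# Sketch: verify that both replication threads report Yes.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
io=$(echo "$status" | awk '/Slave_IO_Running/ {print $2}')
sql=$(echo "$status" | awk '/Slave_SQL_Running/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
  echo "replication OK"
else
  echo "replication BROKEN"
fi
```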
9. Create database on m1
MariaDB [(none)]> create database school;
10. View the synchronized database on m2
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)
11. On both slaves, point replication at m1 - note the log file name and position parameters (both point to m1)
MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;
12. Turn on the synchronization function
MariaDB [(none)]> start slave;
#View the synchronized databases on the slave
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)
13. Install MMM related software on four servers
[root@localhost ~]# yum -y install mysql-mmm*
14. Configure the mmm_common.conf configuration file
[root@localhost ~]# vim /etc/mysql-mmm/mmm_common.conf
<host default>
    #Change the network card to ens33
    cluster_interface       ens33
    pid_path                /run/mysql-mmm-agent.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        replication
    #Replication password granted earlier
    replication_password    123456
    agent_user              mmm_agent
    #Agent password
    agent_password          123456
</host>
#Specify the roles and IP addresses of the four servers
<host db1>
    ip      192.168.142.131
    mode    master
    peer    db2
</host>
<host db2>
    ip      192.168.142.134
    mode    master
    peer    db1
</host>
<host db3>
    ip      192.168.142.130
    mode    slave
</host>
<host db4>
    ip      192.168.142.135
    mode    slave
</host>
#Writer virtual IP on the master servers
<role writer>
    hosts   db1, db2
    ips     192.168.142.250
    mode    exclusive
</role>
#Reader virtual IPs on the slave servers
<role reader>
    hosts   db3, db4
    ips     192.168.142.251, 192.168.142.252
    mode    balanced
</role>
15. Copy the configuration file from m1 to the other three servers
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.134:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.130:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.135:/etc/mysql-mmm/mmm_common.conf
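The three copies differ only in the target host, so they can be built in one loop. A sketch (echoed as a dry run so it is safe to test; remove the quotes around the command, or pipe the output to sh, to actually copy):

```shell
# Sketch: build the three copy commands in one loop instead of typing them.
cmds=$(for host in 192.168.142.134 192.168.142.130 192.168.142.135; do
  echo "scp /etc/mysql-mmm/mmm_common.conf root@${host}:/etc/mysql-mmm/"
done)
echo "$cmds"
```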
Step 2: configure monitor monitoring server
1. Install the epel-release repo and the MMM software
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum -y install epel-release
[root@localhost ~]# yum clean all && yum makecache
[root@localhost ~]# yum -y install mysql-mmm*
2. Copy the configuration file from m1 to the monitoring server
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.136:/etc/mysql-mmm/
3. Configure the mmm_mon.conf configuration file
[root@localhost ~]# vim /etc/mysql-mmm/mmm_mon.conf
<monitor>
    ip                  127.0.0.1
    pid_path            /run/mysql-mmm-monitor.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    #IP addresses of the four database servers
    ping_ips            192.168.142.131,192.168.142.134,192.168.142.130,192.168.142.135
    auto_set_online     10
</monitor>
<host default>
    #Monitor user authorized in the step below
    monitor_user        mmm_monitor
    monitor_password    123456
</host>
4. Authorize the mmm_agent user on all database servers
MariaDB [(none)]> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.142.%' identified by '123456';
Query OK, 0 rows affected (0.02 sec)
5. Authorize the mmm_monitor user on all database servers
MariaDB [(none)]> grant replication client on *.* to 'mmm_monitor'@'192.168.142.%' identified by '123456';
Query OK, 0 rows affected (0.02 sec)
6. Reload the privilege tables
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.02 sec)
7. Set agent name on all database servers
[root@localhost ~]# vim /etc/mysql-mmm/mmm_agent.conf
#Modify the agent profile
#Default name on m1
this db1
#Name on m2
this db2
#Name on s1
this db3
#Name on s2
this db4
8. Start the agent function on all database servers and set the start-up mode
#Start the agent
[root@localhost ~]# systemctl start mysql-mmm-agent.service
#Enable start on boot
[root@localhost ~]# systemctl enable mysql-mmm-agent.service
9. Enable the monitoring service on the monitor server
[root@localhost mysql-mmm]# systemctl start mysql-mmm-monitor.service
10. Check the status of each node
[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
  db2(192.168.142.134) master/ONLINE. Roles:
  db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)
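When scripting health checks it is handy to extract the current writer node from this output. A hedged sketch (sample output embedded below; in practice use `show=$(mmm_control show)` on the monitor):

```shell
# Sketch: find which node currently holds the writer role.
show='db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
db2(192.168.142.134) master/ONLINE. Roles:
db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)'
writer=$(echo "$show" | awk '/writer\(/ {print $1}')
echo "writer is on $writer"
```

After a failover the same one-liner shows the new writer, which makes it easy to confirm that the VIP actually moved.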
11. Check the status of all servers
[root@localhost ~]# mmm_control checks all
db4  ping         [last change: 2019/11/25 18:23:03]  OK
db4  mysql        [last change: 2019/11/25 18:23:03]  OK
db4  rep_threads  [last change: 2019/11/25 18:23:03]  OK
db4  rep_backlog  [last change: 2019/11/25 18:23:03]  OK: Backlog is null
db2  ping         [last change: 2019/11/25 18:23:03]  OK
db2  mysql        [last change: 2019/11/25 18:59:01]  OK
db2  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db2  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null
db3  ping         [last change: 2019/11/25 18:23:03]  OK
db3  mysql        [last change: 2019/11/25 18:59:01]  OK
db3  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db3  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null
db1  ping         [last change: 2019/11/25 18:23:03]  OK
db1  mysql        [last change: 2019/11/25 18:59:01]  OK
db1  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db1  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null
Step 3: fault test
1. Simulate downtime of the m1 server by stopping its service
[root@localhost ~]# systemctl stop mariadb.service
2. When the m1 server goes down, m2 takes over the writer virtual IP and continues to provide service
[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles:
  db2(192.168.142.134) master/ONLINE. Roles: writer(192.168.142.250)
  db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)
3. Simulate downtime of the s1 server by stopping its service
[root@localhost ~]# systemctl stop mariadb.service
4. When the s1 server goes down, s2 takes over its reader virtual IP and continues to provide service
[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
  db2(192.168.142.134) master/ONLINE. Roles:
  db3(192.168.142.130) slave/HARD_OFFLINE. Roles:
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251), reader(192.168.142.252)
5. On the m1 server, authorize logins from the monitoring server's address
MariaDB [(none)]> grant all on *.* to 'root'@'192.168.142.136' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
6. Install the database client on the monitoring server
[root@localhost ~]# yum -y install mariadb
7. On the monitoring server, log in to the database through the writer VIP with the authorized user and create data
[root@localhost ~]# mysql -u root -p -h 192.168.142.250
Enter password:     #enter the password set above
#Create a database
MariaDB [(none)]> create database BDQN;
Query OK, 1 row affected (0.01 sec)
8. The new data is synchronized promptly and can be viewed on all database servers
MariaDB [(none)]> show databases;    #view the databases
+--------------------+
| Database           |
+--------------------+
| information_schema |
| BDQN               |    #the BDQN database was synchronized
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
6 rows in set (0.00 sec)