Introduction to MHA:
(1) Introduction
MHA is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton (Yoshinori Matsunobu) of the Japanese company DeNA (he now works at Facebook), and is an excellent set of high-availability software for failover and master-slave promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the database failover automatically within 0-30 seconds, and throughout the failover it preserves data consistency to the maximum extent possible, achieving true high availability.
(2) The software consists of two parts:
MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on a single slave node. MHA Node runs on every MySQL server. The MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master, and then points all the other slaves at the new master. The entire failover process is fully transparent to the application.
(3) How it works:
1. During automatic failover, MHA tries to save the binary logs from the crashed master so that as little data as possible is lost, but this is not always achievable. For example, if the master's hardware has failed, or the master cannot be reached over SSH, MHA cannot save the binary logs; it performs the failover anyway, and the most recent data may be lost. Using the semi-synchronous replication introduced in MySQL 5.5 greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary logs, MHA can apply those logs to all the other slaves, keeping every node consistent.
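As a hedged illustration (the plugin setup itself is not part of the build steps below), semi-synchronous replication in MySQL 5.5+ is enabled through the bundled semisync plugins; the master then waits for at least one slave to acknowledge receipt of each transaction before returning to the client:

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';    ##on the master
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;    ##ms to wait before falling back to async (example value)
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';    ##on each slave
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;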
2. Failover sequence (a manual sketch of steps (2) and (3) follows this list):
(1) Save the binary log events from the crashed master;
(2) Identify the slave with the most recent updates;
(3) Apply the differential relay logs to the other slaves;
(4) Apply the binlog events saved from the master;
(5) Promote one slave to be the new master;
(6) Point the other slaves at the new master and resume replication.
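A rough manual equivalent of steps (2) and (3), shown only to illustrate what MHA automates: compare the replication coordinates on each slave and pick the one that has read the furthest:

mysql> show slave status\G
...
        Master_Log_File: master-bin.000001    ##binlog file the I/O thread has read up to
    Read_Master_Log_Pos: 1213                 ##position within that file
...

The slave with the highest (Master_Log_File, Read_Master_Log_Pos) pair holds the latest data; its relay logs are used to bring the other slaves up to date before the binlog saved from the dead master is applied.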
Experimental environment:
master  (192.168.13.129)  mha4mysql-node
slave1  (192.168.13.130)  mha4mysql-node
slave2  (192.168.13.131)  mha4mysql-node
manager (192.168.13.128)  mha4mysql-manager, mha4mysql-node
1. Install the MySQL database on the three master/slave servers

#Install the compilation dependencies
[root@localhost ~]# yum -y install gcc gcc-c++ ncurses ncurses-devel bison perl-Module-Install cmake
[root@localhost ~]# mount.cifs //192.168.100.3/mha /mnt    ##Mount the package share
Password for root@//192.168.100.3/mha:
[root@localhost ~]# cd /mnt
[root@localhost mnt]# tar zxvf cmake-2.8.6.tar.gz -C /opt    ##Unpack the cmake build tool
[root@localhost mnt]# cd /opt/cmake-2.8.6/
[root@localhost cmake-2.8.6]# ./configure    ##Configure
[root@localhost cmake-2.8.6]# gmake && gmake install    ##Compile and install

#Install the MySQL database
[root@localhost cmake-2.8.6]# cd /mnt
[root@localhost mnt]# tar zxvf mysql-5.6.36.tar.gz -C /opt    ##Unpack MySQL

#Compile MySQL
[root@localhost mnt]# cd /opt/mysql-5.6.36/
[root@localhost mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \    ##Default character set
-DDEFAULT_COLLATION=utf8_general_ci \    ##Default collation
-DWITH_EXTRA_CHARSETS=all \    ##Include all extra character sets
-DSYSCONFDIR=/etc    ##Configuration file directory

#Install
[root@localhost mysql-5.6.36]# make && make install    ##Compile and install

#Set up the startup files and environment variables
[root@localhost mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf    ##Copy the configuration file
[root@localhost mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld    ##Copy the startup script
[root@localhost mysql-5.6.36]# chmod +x /etc/rc.d/init.d/mysqld    ##Make it executable
[root@localhost mysql-5.6.36]# chkconfig --add mysqld    ##Add it to service management
[root@localhost mysql-5.6.36]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile    ##Extend the PATH
[root@localhost mysql-5.6.36]# source /etc/profile    ##Reload the environment variables

#Create the mysql user and group and authorize
[root@localhost mysql-5.6.36]# groupadd mysql    ##Create the group
[root@localhost mysql-5.6.36]# useradd -M -s /sbin/nologin mysql -g mysql    ##Create the system user
[root@localhost mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql    ##Change the owner and group
[root@localhost mysql-5.6.36]# mkdir -p /data/mysql    ##Create the data directory

#Initialize the database
[root@localhost mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql \    ##Installation directory
--datadir=/usr/local/mysql/data \    ##Data directory
--user=mysql    ##User
2. Modify the MySQL main configuration file /etc/my.cnf. Note that the server-id values of the three servers must all be different.

---Configure the master server:
[root@master mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 1
#Enable the binary log
log_bin = master-bin
#Log updates replicated from the master, so a slave can be promoted
log-slave-updates = true

---Configure slave server 1:
[root@slave1 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 2
#Enable the binary log
log_bin = master-bin
#Use relay logs for synchronization
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

---Configure slave server 2:
[root@slave2 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
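Once the service is started in the next step, the setting can be verified on each node (a quick sanity check; expect 1 on the master, 2 and 3 on the slaves):

mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+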
3. Start the MySQL service on all three servers

#Create two soft links on each of the three servers
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

#Start MySQL
[root@master mysql-5.6.36]# /usr/local/mysql/bin/mysqld_safe --user=mysql &

#Turn off the firewall and SELinux
[root@master mysql-5.6.36]# systemctl stop firewalld.service
[root@master mysql-5.6.36]# setenforce 0
4. Configure MySQL master-slave synchronization (one master, two slaves); authorize the two users on all database nodes

[root@master mysql-5.6.36]# mysql -u root -p    ##Enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.13.%' identified by '123';    ##Replication user used by the slaves
mysql> grant all privileges on *.* to 'mha'@'192.168.13.%' identified by 'manager';    ##Monitoring user used by the manager
mysql> flush privileges;    ##Refresh privileges

#Add the following hostname-based grants as well (not required in theory, but MHA performs its checks by hostname)
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
5. View the binary log file and synchronization position on the master server

mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1213 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
6. Set up synchronization on the two slave servers

#Execute the following commands on both slave servers to synchronize the master's logs
mysql> change master to master_host='192.168.13.129',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1213;
mysql> start slave;    ##Start the slave
mysql> show slave status\G    ##Check the slave threads
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
mysql> set global read_only=1;    ##Make the slave read-only
mysql> flush privileges;    ##Refresh privileges
7. Install the MHA dependencies on all servers; install the epel repository first

[root@master mysql-5.6.36]# yum install epel-release --nogpgcheck -y    ##Install the repository
[root@master mysql-5.6.36]# yum install -y perl-DBD-MySQL \    ##MySQL driver
perl-Config-Tiny \    ##Configuration file handling
perl-Log-Dispatch \    ##Logging
perl-Parallel-ForkManager \    ##Multi-process management
perl-ExtUtils-CBuilder \    ##Build tools
perl-ExtUtils-MakeMaker \
perl-CPAN    ##Perl library
8. Install node on all servers

#Unpack and install node
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-node-0.57.tar.gz
[root@manager ~]# cd mha4mysql-node-0.57/
[root@manager mha4mysql-node-0.57]# perl Makefile.PL    ##Generate the Makefile
[root@manager mha4mysql-node-0.57]# make && make install    ##Compile and install
9. Install manager on the manager server (note: the node component must be installed before the manager component)

#Turn off the firewall
[root@manager ~]# systemctl stop firewalld.service
[root@manager ~]# setenforce 0

#Unpack and install manager
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-manager-0.57.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.57/
[root@manager mha4mysql-manager-0.57]# perl Makefile.PL    ##Generate the Makefile
[root@manager mha4mysql-manager-0.57]# make && make install    ##Compile and install
After the manager is installed, several tools are generated under the /usr/local/bin directory:

- masterha_check_repl    checks the MySQL replication status
- masterha_master_monitor    checks whether the master is down
- masterha_check_ssh    checks MHA's SSH configuration
- masterha_master_switch    controls failover
- masterha_check_status    checks the current MHA running status
- masterha_conf_host    adds or deletes configured server information
- masterha_stop    stops the manager
- masterha_manager    starts the manager (the monitoring process)
Several scripts are also generated under /usr/local/bin after node is installed (these are usually triggered by the MHA Manager scripts and need no human action):

- apply_diff_relay_logs: identifies differences in relay log events and applies the differences to the other slaves;
- save_binary_logs: saves and copies the master's binary logs;
- filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool);
- purge_relay_logs: clears relay logs (without blocking the SQL thread); see the example below.
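Of these, purge_relay_logs is the one most often run by hand: MHA needs relay logs kept around for recovery, so automatic purging (relay_log_purge) is typically disabled and the logs are cleaned up periodically instead. A hedged example invocation, run on each slave (often from cron):

[root@slave1 ~]# /usr/local/bin/purge_relay_logs --user=mha --password=manager --disable_relay_log_purge --workdir=/tmp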
10. Configure password-free access

##On manager, configure password-free authentication to all database nodes
[root@manager ~]# ssh-keygen -t rsa    ##Generate the key pair
Enter file in which to save the key (/root/.ssh/id_rsa):    ##Press Enter
Enter passphrase (empty for no passphrase):    ##Press Enter
Enter same passphrase again:    ##Press Enter
[root@manager ~]# ssh-copy-id 192.168.13.129    ##Copy the key to the other servers
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.13.129's password:    ##Enter the password of the 129 server
[root@manager ~]# ssh-copy-id 192.168.13.130
[root@manager ~]# ssh-copy-id 192.168.13.131

##On master, configure password-free authentication to the database nodes slave1 and slave2
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 192.168.13.130
[root@master ~]# ssh-copy-id 192.168.13.131

##On slave1, configure password-free authentication to the database nodes master and slave2
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# ssh-copy-id 192.168.13.129
[root@slave1 ~]# ssh-copy-id 192.168.13.131

##On slave2, configure password-free authentication to the database nodes master and slave1
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# ssh-copy-id 192.168.13.129
[root@slave2 ~]# ssh-copy-id 192.168.13.130
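Before relying on MHA's own checker (step 13 below), key-based login can be verified by hand; each command should print the remote hostname without asking for a password:

[root@manager ~]# ssh root@192.168.13.129 'hostname'    ##should print "master" with no password prompt
master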
11. Configure MHA: copy the relevant scripts to the /usr/local/bin directory on the manager node, and configure them

[root@manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/    ##Copy the scripts to /usr/local/bin
[root@manager ~]# ls mha4mysql-manager-0.57/samples/scripts/    ##Four executable scripts
master_ip_failover: VIP management script for automatic switching
master_ip_online_change: VIP management for online (manual) switching
power_manager: script that shuts down the host after a failure
send_report: script that sends an alert after failover

##Copy the VIP management script for automatic switching to the /usr/local/bin/ directory:
[root@manager ~]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
[root@manager ~]# vim /usr/local/bin/master_ip_failover    ##Delete everything and rewrite the master_ip_failover script

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.13.100';
my $brdc = '192.168.13.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);
exit &main();
sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
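The rewritten script can be sanity-checked by hand before MHA ever calls it; per the script body above, --command=status only prints the test line and a status message, so this is a dry run that touches no VIP:

[root@manager ~]# /usr/local/bin/master_ip_failover --command=status

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.13.100===

Checking the Status of the script.. OK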
12. Create the MHA software directory on the manager node and copy the configuration file

[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/

#Edit the configuration file
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
#manager log file
manager_log=/var/log/masterha/app1/manager.log
#manager working directory
manager_workdir=/var/log/masterha/app1
#location of the master's binlogs; must match the binlog path configured on the master
master_binlog_dir=/usr/local/mysql/data
#switch script for automatic failover (the script written above)
master_ip_failover_script=/usr/local/bin/master_ip_failover
#switch script for manual/online switching
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
#password of the monitoring user created earlier
password=manager
remote_workdir=/tmp
#replication user password
repl_password=123
#replication user
repl_user=myslave
#secondary check script: probe the master from the other hosts before declaring it dead
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.13.130 -s 192.168.13.131
#script to power off the failed host (left empty here to disable it)
shutdown_script=""
#ssh login user
ssh_user=root
#monitoring user
user=mha

[server1]
hostname=192.168.13.129
port=3306

[server2]
#candidate master: after a failover this slave is promoted to master even if it does not hold the most current data
candidate_master=1
hostname=192.168.13.130
#ignore replication delay when selecting the new master
check_repl_delay=0
port=3306

[server3]
hostname=192.168.13.131
port=3306
13. Test SSH password-free authentication; if everything is normal, the output ends with a success message. Then check the replication health.

[root@manager ~]# masterha_check_ssh -conf=/etc/masterha/app1.cnf
....
[root@manager ~]# masterha_check_repl -conf=/etc/masterha/app1.cnf
14. Configure the virtual IP on the master
[root@master mha4mysql-node-0.57]# /sbin/ifconfig ens33:1 192.168.13.100/24
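Confirm that the VIP is actually up before starting the manager; the inet line of the sub-interface should show 192.168.13.100:

[root@master mha4mysql-node-0.57]# ifconfig ens33:1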
15. Start MHA on the manager server

[root@manager scripts]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

##Check the MHA status; the current master is the 192.168.13.129 node
[root@manager scripts]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:43036) is running(0:PING_OK), master:192.168.13.129
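To stop monitoring later, use masterha_stop (listed among the manager tools above). Also note why --ignore_last_failover is passed: after a completed failover, MHA writes an app1.failover.complete file into the working directory and by default refuses to fail over again within 8 hours; that flag bypasses the guard.

[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf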
16. Fault simulation

[root@manager scripts]# tailf /var/log/masterha/app1/manager.log    ##Watch the manager log

##Kill MySQL on the master server
[root@master mha4mysql-node-0.57]# pkill -9 mysql
In the log you can see the states of the slaves, and the VIP switches to one of them:

[root@slave1 mha4mysql-node-0.57]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.13.130  netmask 255.255.255.0  broadcast 192.168.13.255
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.13.100  netmask 255.255.255.0  broadcast 192.168.13.255
        ether 00:0c:29:af:94:06  txqueuelen 1000  (Ethernet)
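The remaining slave should also have been repointed at the new master. Assuming slave1 (192.168.13.130) was promoted as the candidate master, slave2 is expected to show:

mysql> show slave status\G
...
            Master_Host: 192.168.13.130
       Slave_IO_Running: Yes
      Slave_SQL_Running: Yes
...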
At this point, MySQL is also installed on the manager, and a client can connect to the database through the virtual IP:

##Grant access on the database server that now holds the VIP
mysql> grant all on *.* to 'root'@'%' identified by 'abc123';
Query OK, 0 rows affected (0.00 sec)

##Log in from the client using the virtual IP
[root@manager ~]# mysql -uroot -h 192.168.13.100 -p    ##Specify the virtual IP
Enter password:    ##Enter the password
MySQL [(none)]>