MySQL MHA high availability cluster deployment and failover
1. MHA concept
MHA (Master High Availability) is a mature software suite for failover and master-slave replication in MySQL high-availability environments.
MHA exists to solve the problem of a MySQL single point of failure.
During a MySQL failover, MHA can automatically complete the switch within 0-30 seconds.
MHA preserves data consistency as far as possible during failover, achieving high availability in the real sense.
- Composition of MHA
- MHA Node (data node)
The MHA Node runs on each MySQL server.
- MHA Manager (management node)
MHA Manager can be deployed on an independent machine to manage multiple master-slave clusters, or it can be deployed on a slave node.
The MHA Manager periodically probes the master node in the cluster. When the master fails, it automatically promotes the slave with the latest data to be the new master and then repoints all the other slaves to it. The entire failover process is completely transparent to the application.
- Characteristics of MHA
- During the automatic failover process, MHA tries to save the binary logs from the downed master, to avoid data loss as far as possible.
- Using semi-synchronous replication can greatly reduce the risk of data loss: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, thereby ensuring data consistency across all nodes.
- At present, MHA requires a master-slave architecture of at least three servers, i.e. one master and two slaves.
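As a sketch of the semi-synchronous replication that the point above relies on, the following statements enable it in the mysql client (plugin file names are those shipped with stock MySQL 5.7, an assumption here; MHA itself does not require semi-sync):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

To make the setting survive restarts, the corresponding `rpl_semi_sync_*_enabled = 1` lines would also go into /etc/my.cnf.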
2. Build MySQL+MHA
Experimental environment
host | operating system | IP address
---|---|---
MHAmanager | CentOS7-3 | 192.168.86.60
mysql1 | CentOS7-3 | 192.168.86.10
mysql2 | CentOS7-3 | 192.168.86.20
mysql3 | CentOS7-3 | 192.168.86.30
- All servers, turn off the system firewall and security mechanism
```shell
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
```
- Modify the hostname of master, Slave1 and Slave2 nodes
```shell
hostnamectl set-hostname mysql1
su -
hostnamectl set-hostname mysql2
su -
hostnamectl set-hostname mysql3
su -
```
- Modify the main configuration file /etc/my.cnf on the three MySQL servers
master:
```shell
vim /etc/my.cnf
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true

systemctl restart mysqld
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
```
slave (2 sets):
```shell
vim /etc/my.cnf
server-id = 2    #3 on mysql3; the server IDs of the three servers must all be different
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

systemctl restart mysqld
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
```
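Duplicate server IDs silently break replication, so it is worth checking them. A small sketch (the helper name is made up, not part of MHA): given copies of each node's my.cnf, print any server-id value that appears more than once.

```shell
# Hypothetical helper: print duplicated server-id values across config files.
# Empty output means all server IDs are unique, as replication requires.
check_server_ids() {
  awk -F'=' '/^server-id/ {gsub(/ /,"",$2); print $2}' "$@" | sort | uniq -d
}
```

For example, `check_server_ids master.cnf slave1.cnf slave2.cnf` stays silent when the three files carry distinct IDs.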
- Configure Mysql with one master and two slaves
Create the replication and monitoring accounts on all MySQL servers
```shell
mysql -e "grant replication slave on *.* to 'myslave'@'192.168.86.%' identified by '123456';"
mysql -e "grant all privileges on *.* to 'mha'@'192.168.86.%' identified by 'manager';"
mysql -e "grant all privileges on *.* to 'mha'@'mysql1' identified by 'manager';"
mysql -e "grant all privileges on *.* to 'mha'@'mysql2' identified by 'manager';"
mysql -e "grant all privileges on *.* to 'mha'@'mysql3' identified by 'manager';"
```
View the binary log file and synchronization position on the master node
```shell
mysql -e "show master status;"
#Example output: your binary log file name and offset may differ; note down your own values
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     5090 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
```
Perform the synchronization on the two slave nodes
```shell
mysql -e "change master to master_host='192.168.86.10',master_user='myslave',master_password='123456',master_log_file='master-bin.000001',master_log_pos=5090;"
mysql -e "start slave;"
mysql -e "show slave status\G" | awk '/Running:/{print}'
```
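The awk filter above only prints the thread states; a stricter sketch (the helper name is made up) succeeds only when both replication threads report Yes in the `show slave status\G` output:

```shell
# Hypothetical helper: exit 0 only if both the IO and SQL threads are running.
check_slave_threads() {
  printf '%s\n' "$1" | grep -q 'Slave_IO_Running: Yes' &&
  printf '%s\n' "$1" | grep -q 'Slave_SQL_Running: Yes'
}
# Example: check_slave_threads "$(mysql -e 'show slave status\G')" && echo "replication OK"
```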
Slave1 and Slave2 nodes are set to read-only mode
```shell
mysql -e "set global read_only=1;"
```
- Master slave replication verification
Create a database db_test on the master server; it should also be visible on the slave servers.
Master-slave replication verification is complete.
- Install MHA software
- Install the MHA dependency environment on all servers. First install the epel source (this requires an internet connection), then install the node component on all servers.

```shell
mv /etc/yum.repos.d/repos.bak/CentOS-* /etc/yum.repos.d/
yum list
yum install epel-release --nogpgcheck -y
yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN

#Put the package mha4mysql-node-0.57.tar.gz under the /opt directory
cd /opt
tar zxvf mha4mysql-node-0.57.tar.gz
cd mha4mysql-node-0.57
perl Makefile.PL
make && make install
```
The MHA version depends on the operating system version; for CentOS 7.4 used here, version 0.57 must be selected.
The node component must be installed on all servers first, and then the manager component must be installed on the MHA manager node, because the manager depends on the node component.
- Install the manager component on the MHA manager node
```shell
#Put the package mha4mysql-manager-0.57.tar.gz under the /opt directory
cd /opt
tar zxvf mha4mysql-manager-0.57.tar.gz
cd mha4mysql-manager-0.57
perl Makefile.PL
make && make install
```
#After the manager component is installed, several tools will be generated under /usr/local/bin, mainly the following:
masterha_check_ssh | Check the SSH configuration of MHA
---|---
masterha_check_repl | Check the MySQL replication status
masterha_manager | Script to start the manager
masterha_check_status | Detect the current MHA running status
masterha_master_monitor | Check whether the master is down
masterha_master_switch | Control failover (automatic or manual)
masterha_conf_host | Add or delete configured server information
masterha_stop | Stop the manager
#After the node component is installed, several scripts will also be generated under /usr/local/bin (these tools are usually triggered by the MHA Manager scripts and need no manual operation). The main ones are as follows:
save_binary_logs | Save and copy the master's binary logs
---|---
apply_diff_relay_logs | Identify differing relay log events and apply them to the other slaves
filter_mysqlbinlog | Remove unnecessary ROLLBACK events
- Configure password less authentication on all servers
1. Configure password free authentication to all database nodes on the manager node
```shell
ssh-keygen -t rsa    #Press enter all the way
ssh-copy-id 192.168.86.10
ssh-copy-id 192.168.86.20
ssh-copy-id 192.168.86.30
```
2. Configure password free authentication to database nodes mysql2 and mysql3 on mysql1
```shell
ssh-keygen -t rsa
ssh-copy-id 192.168.86.20
ssh-copy-id 192.168.86.30
```
3. Configure password free authentication to database nodes mysql1 and mysql3 on mysql2
```shell
ssh-keygen -t rsa
ssh-copy-id 192.168.86.10
ssh-copy-id 192.168.86.30
```
4. Configure password free authentication to the database nodes mysql1 and mysql2 on mysql3
```shell
ssh-keygen -t rsa
ssh-copy-id 192.168.86.10
ssh-copy-id 192.168.86.20
```
- Configure MHA on the manager node
1. On the manager node, copy the relevant scripts to the / usr/local/bin directory
```shell
cp -rp /opt/mha4mysql-manager-0.57/samples/scripts /usr/local/bin
#After copying, there will be four executable files
ll /usr/local/bin/scripts/
```
master_ip_failover | Script for VIP management during automatic switching
---|---
master_ip_online_change | VIP management during online (manual) switching
power_manager | Script to shut down the host after a failure
send_report | Script for sending alarms after failover
2. Copy the script above for VIP management during automatic switching to the /usr/local/bin directory. Here the master_ip_failover script is used to manage the VIP and failover.
```shell
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin
```
3. Modify the script as follows (delete the original content, copy the following in directly, and adjust the VIP-related parameters; the VIP is customizable):
```shell
echo '' > /usr/local/bin/master_ip_failover
vim /usr/local/bin/master_ip_failover
```

```perl
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
#############################Add content section#########################################
my $vip = '192.168.86.200';     #Specify the vip address
my $brdc = '192.168.86.255';    #Specify the broadcast address of the vip
my $ifdev = 'ens33';            #Specify the network card the vip is bound to
my $key = '1';                  #Specify the serial number of the virtual network card the vip is bound to
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";    #The value of this variable is ifconfig ens33:1 192.168.86.200
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";     #The value of this variable is ifconfig ens33:1 down
my $exit_code = 0;              #Specify the exit status code as 0
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##################################################################################
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}

sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
## A simple system call that disables the VIP on the old master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
```
4. Create the MHA software directory and copy the configuration file. Here the app1.cnf configuration file is used to manage the MySQL node servers. The configuration file is generally placed under the /etc/ directory.
```shell
mkdir /etc/masterha
cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha
```
```shell
[server default]
manager_log=/var/log/masterha/app1/manager.log      #manager log
manager_workdir=/var/log/masterha/app1              #manager working directory
master_binlog_dir=/usr/local/mysql/data/            #Where the master stores its binlogs; must match the binlog path configured on the master so MHA can find them
master_ip_failover_script=/usr/local/bin/master_ip_failover            #The switching script for automatic failover, i.e. the script above
master_ip_online_change_script=/usr/local/bin/master_ip_online_change  #The switching script for manual (online) switching
password=manager                  #The password of the mha monitoring user created earlier
ping_interval=1                   #Interval in seconds between pings to the master (default 3); after three failed attempts, failover is triggered automatically
remote_workdir=/tmp               #Where the binlog is saved on the remote MySQL server during a switch
repl_password=123456              #The password of the replication user
repl_user=myslave                 #The replication user
report_script=/usr/local/send_report      #The script that sends an alarm after a switch
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.86.20 -s 192.168.86.30      #The slave server IP addresses used for the secondary check
shutdown_script=""                #The script used to power off the failed host (mainly to prevent split-brain); not used here
ssh_user=root                     #The ssh login user
user=mha                          #The monitoring user

[server1]
hostname=192.168.86.10
port=3306

[server2]
hostname=192.168.86.20
port=3306
candidate_master=1                #Set as candidate master; after a master-slave switch this slave will be promoted to master even if it is not the most up-to-date slave in the cluster
check_repl_delay=0                #By default MHA will not choose a slave whose relay logs lag the master by more than 100MB as the new master, because its recovery would take a long time; with check_repl_delay=0, replication delay is ignored when choosing the new master. This is very useful together with candidate_master=1, because the candidate master must become the new master during a switch

[server3]
hostname=192.168.86.30
port=3306
```
Manually bring up the VIP on the master server

```shell
ifconfig ens33:1 192.168.86.200/24
```
Test ssh password free authentication on the manager node. If it is normal, it will output successfully.
```shell
masterha_check_ssh -conf=/etc/masterha/app1.cnf
```
Test the MySQL master-slave connection on the manager node; the phrase "MySQL Replication Health is OK" at the end indicates that everything is normal.

```shell
masterha_check_repl -conf=/etc/masterha/app1.cnf
```
Start MHA on the manager node
```shell
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
```
- --ignore_last_failover: by default, if MHA detects consecutive outages less than 8 hours apart, it will not fail over. This restriction exists to avoid the ping-pong effect. The parameter tells MHA to ignore the file generated by the last failover: by default, after a switch MHA records it in the log directory set above as the app1.failover.complete file. If that file is found in the directory during the next switch, the switch is not allowed to be triggered unless the file is deleted after the first switch. For convenience, --ignore_last_failover is set here.
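Instead of relying on --ignore_last_failover, the marker file can also be cleaned up between switches. A small sketch (the helper name is made up; the path follows the manager_log directory configured above):

```shell
# Hypothetical helper: remove the *.failover.complete marker so the next
# automatic failover is allowed. $1 is the manager log/working directory,
# e.g. /var/log/masterha/app1.
clear_failover_marker() {
  rm -f "$1"/*.failover.complete
}
```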
Viewing the MHA status, you can see that the current master is Mysql1 node.
```shell
masterha_check_status --conf=/etc/masterha/app1.cnf
```
Check the MHA log; it also shows that the current master is 192.168.86.10, as follows.
```shell
cat /var/log/masterha/app1/manager.log | grep "current master"
```
To shut down the manager service, you can use the following command.
```shell
masterha_stop --conf=/etc/masterha/app1.cnf
```

Alternatively, kill the manager's process ID directly.