A production environment MySQL migration operation (1): data archiving
A production environment MySQL migration operation (2): MySQL space release (defragmentation)
Scheduled archiving was covered in the previous part. However, the disk space of some large tables has not been released, so the database files keep growing. This part introduces the data export and import scheme.
Scheme selection
For now, there are three options:
1. Shut down for several hours, back up with mysqldump (exporting only the last 10 days of data for the large tables), restore, and switch from the shared tablespace to independent tablespaces
2. Use OPTIMIZE TABLE to defragment
3. Set up a master-slave cluster, record the backup time point, import the data into the slave, and finally promote the slave to master
Comparing options 1, 2, and 3: option 1 is easy to operate and flexible, since we can import the most recent day of data first and bring in the rest gradually. Option 2 needs to lock the tables for a long time; moreover, our database version is 5.5.47 with innodb_file_per_table unset, so all tables live in the shared tablespace and OPTIMIZE cannot return that space to the OS. Option 3 is cumbersome and time-consuming. After evaluation, the downtime could be shortened to about one hour, so we finally chose option 1.
Implementation
Preliminary steps
1. Export the most recent half month of data from our archive database.
mkdir -p /mnt/backup/mysql/
cd /mnt/backup/mysql/
mysqldump -uroot -p1 -t --skip-add-locks --single-transaction --quick --databases xxx --tables table1 --where 'clock >1543161600 and clock <=1543939200' >table1_1126-1204.sql
mysqldump -uroot -p1 -t --skip-add-locks --single-transaction --quick --databases xxx --tables table2 --where 'clock >1543161600 and clock <=1543939200' >table2_1126-1204.sql
...
2. Export the most recent days of data from the production database
For example, suppose the upgrade is scheduled for December 10. The data of the six large tables can be dumped in advance; this step requires no service stop and does not affect normal database operation.
mkdir -p /mnt/backup/mysql/
cd /mnt/backup/mysql/
mysqldump -uroot -p1 -t --skip-add-locks --single-transaction --quick --databases xxx --tables table1 --where 'clock >1543939200 and clock <=1544371200' >table1_1205-1209.sql
mysqldump -uroot -p1 -t --skip-add-locks --single-transaction --quick --databases xxx --tables table2 --where 'clock >1543939200 and clock <=1544371200' >table2_1205-1209.sql
...
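The clock bounds in the --where clauses above are Unix timestamps for local midnight. Assuming the server runs in Asia/Shanghai (UTC+8) -- an assumption; adjust TZ to your own environment -- GNU date can derive them:

```shell
# Derive the epoch-second bounds used in the --where clauses.
# Assumption: the clock column stores epoch seconds and the server
# timezone is Asia/Shanghai (UTC+8); change TZ to match your setup.
start=$(TZ=Asia/Shanghai date -d '2018-11-26 00:00:00' +%s)
end=$(TZ=Asia/Shanghai date -d '2018-12-05 00:00:00' +%s)
echo "clock >$start and clock <=$end"
# → clock >1543161600 and clock <=1543939200
```

Computing the bounds this way avoids off-by-one-day mistakes when the archive windows move.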
Operation steps
1. Stop related services
systemctl stop httpd
systemctl stop supervisord
systemctl stop xxx-server.service
2. Execute cleanup script
Since these large tables are partitioned, only the current day's partition needs to be kept; the earlier partitions can simply be dropped. The cleanup scripts differ from database to database, so they are not listed here.
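As a sketch of what such a cleanup script does: generate the DROP PARTITION statements and pipe them into mysql. The pYYYY_MM_DD partition naming scheme here is a hypothetical example; take the real partition names from SHOW CREATE TABLE.

```shell
# Sketch: emit ALTER ... DROP PARTITION for every day before the cutoff.
# The partition naming scheme (pYYYY_MM_DD) is a hypothetical example.
TABLE=table1
for day in 2018_12_07 2018_12_08; do
  echo "ALTER TABLE $TABLE DROP PARTITION p$day;"
done
# To execute, pipe the output into:  mysql -uroot -p1 xxx
```

Printing the statements first (instead of running them directly) lets you review exactly which partitions will be dropped before committing.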
3. Export remaining table data
Use mysqldump to export all databases on the master except those six big tables
mkdir -p /mnt/backup/mysql/
cd /mnt/backup/mysql/
mysqldump -R -uroot -p1 --databases db1 db2alert db3 guacamole --ignore-table=db1.table1 --ignore-table=db1.table2 > mysqldump.sql
Back up the user and database information stored in the mysql database
mysqldump -t -uroot -p1 --databases mysql > mysql_info_data.sql
4. Use mysqldump to export the cleaned data of the six large tables
cd /mnt/backup/mysql/
mysqldump -uroot -p1 --databases xxx --tables table1 --where 'clock >1544371200' >table1.sql
mysqldump -uroot -p1 --databases xxx --tables table2 --where 'clock >1544371200' >table2.sql
...
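The per-table dumps all follow one pattern, so a loop avoids copy-paste mistakes. A dry-run sketch (only table1 and table2 are named in this article; extend the list with the real table names):

```shell
# Dry run: print one mysqldump command per large table.
# Remove the echo to actually execute; extend the list to all six tables.
for t in table1 table2; do
  echo "mysqldump -uroot -p1 --databases xxx --tables $t --where 'clock >1544371200' >$t.sql"
done
```

Review the printed commands, then strip the echo to run the dumps for real.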
5. Delete all databases
mysql -uroot -p1
drop database db1;
drop database db2;
drop database db3;
Delete the shared tablespace and log files (ib*) under the data directory /var/lib/mysql
systemctl stop mariadb
rm -rf /var/lib/mysql/ib*
df   # check whether the space has been freed
6. Configure /etc/my.cnf
Modify /etc/my.cnf to switch from the shared tablespace to independent (per-table) tablespaces.
Note: from MySQL 5.6.6 on, innodb_file_per_table=ON is the default.
The service currently runs 5.5.47, so it must be configured in my.cnf; the OpenStack environment uses 5.6.22, so no setting is needed there.
Add the following lines, then restart:
[mysqld]
innodb-file-per-table=1

systemctl start mariadb
mysql -uroot -p1
# Check whether the setting took effect; ON means it is active.
show variables like '%per_table%';
7. Batch import database
cd /mnt/backup/mysql
mysql -uroot -p1
source mysqldump.sql;
use db1
source table1.sql;
source table2.sql;
...
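The interactive session above can also be scripted. A sketch that builds an import file from the per-table dumps (the file names are those produced by the export steps; extend the loop for the remaining tables):

```shell
# Build a SOURCE script for the mysql client from the per-table dumps.
# Extend the list to all six tables produced by the export steps.
{
  echo "USE db1;"
  for f in table1.sql table2.sql; do
    echo "SOURCE /mnt/backup/mysql/$f;"
  done
} > import.sql
cat import.sql
# To execute: mysql -uroot -p1 < mysqldump.sql && mysql -uroot -p1 < import.sql
```

Generating the script first makes the import order explicit and repeatable if a restore has to be re-run.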
8. Validate the data
mysql -uroot -p1
use db1
show table status like 'table1%';
9. Restart the services and verify
systemctl start httpd
systemctl start supervisord
systemctl start xxx-server.service
At this point the service is running normally again. The historical data can now be imported gradually.
Follow-up steps
Import historical data
cd /mnt/backup/mysql
mysql -uroot -p1
use db1
source table1_1205-1209.sql;
source table2_1205-1209.sql;
...