mysqldump usage guide

Posted by fenway on Sat, 29 Jan 2022 01:33:43 +0100

Databases are critical. Lose data without a backup and all you can do is pack up and run. So make backups!

1, Tool introduction

mysqldump is a logical backup tool that ships with MySQL.

It works by connecting to the MySQL server over the client protocol, querying the data to be backed up, and converting the result set into the corresponding INSERT statements. To restore the data, we simply execute those INSERT statements against the target server.
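For illustration, here is a sketch of what a dump file essentially contains (the table name and rows below are made up):

-- Hypothetical excerpt of a mysqldump output file
DROP TABLE IF EXISTS `t1`;
CREATE TABLE `t1` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
INSERT INTO `t1` VALUES (1,'a'),(2,'b');

Restoring is just replaying the file, for example: mysql -uops -p testdb < dump.sql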

2, Tool features

  • Free backup tool
  • Supports remote backup
  • Can generate output in CSV or XML format
  • Backup files are plain text, so you can edit them directly with text tools for more flexible, convenient restores
  • Storage-engine agnostic: it can back up and restore across many storage engines; it supports hot backup for the InnoDB engine and warm backup for MyISAM (by applying table locks)
  • Floating-point data may lose precision
  • The backup process is serial; parallel backup is not supported

3, Backup permissions

Backup object                      Required privilege
table                              SELECT
view                               SHOW VIEW
trigger                            TRIGGER
event                              EVENT
stored procedures and functions    SELECT on mysql.proc
dumping user accounts              SELECT on the mysql system schema

The LOCK TABLES privilege is also required when the --single-transaction option is not used for the backup.
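A minimal backup account combining the privileges above might look like the following sketch (the account name 'backup'@'%' and the password are placeholders; the RELOAD and REPLICATION CLIENT grants are assumptions for when --master-data or --flush-logs come into play):

-- Hypothetical minimal backup account
CREATE USER 'backup'@'%' IDENTIFIED BY '********';
-- Core privileges from the table above, plus LOCK TABLES for runs without --single-transaction
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON *.* TO 'backup'@'%';
-- Assumed extras for --master-data / --flush-logs (FLUSH statements and SHOW MASTER STATUS)
GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'backup'@'%';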

4, Tool usage restrictions

  1. By default, mysqldump does not dump the INFORMATION_SCHEMA, performance_schema, or sys schemas. To dump any of them, name them explicitly on the command line.

  2. mysqldump does not dump InnoDB CREATE TABLESPACE statements.

  3. mysqldump does not back up the NDB Cluster ndbinfo information database.

  4. When backing up a GTID-enabled database with mysqldump, note that a backup file containing GTID information cannot be restored to a server with GTIDs disabled; a workaround is sketched below.
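     A sketch of that workaround, using the --set-gtid-purged option so the file can be loaded into a non-GTID server (connection options elided; the database name is illustrative):

    # Omit GTID information (the SET @@GLOBAL.GTID_PURGED statement) from the dump
    mysqldump [options] --single-transaction --set-gtid-purged=OFF --databases employees > employees_no_gtid.sql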

  5. On Windows, when dumping through PowerShell with the following command, the dump file is written in UTF-16 encoding by default; MySQL does not allow UTF-16 as a connection character set, so a dump produced this way cannot be loaded back into the database correctly:

    mysqldump [options] > dump.sql
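    The documented workaround is to have mysqldump write the dump file itself, rather than using shell redirection:

    mysqldump [options] --result-file=dump.sql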
    
  6. mysqldump is single-threaded. With large data volumes the backup takes a long time, and locking non-transactional tables during the backup may block the business for an extended period (restoring a SQL-format backup is also slow).

  7. Use the --compact option with caution: it strips a number of settings from the head and tail of the file (time zone, character set, ...), which can cause subtle problems later.

  8. The --lock-all-tables and --flush-privileges options execute FLUSH TABLES and FLUSH PRIVILEGES respectively during the backup, which generates GTIDs; take care when backing up from a replica.

  9. Before a full backup of the database, collect the relevant information about it (storage engines, character sets, etc.) to make sure the backup content is complete. The collection statements:

    -- View table related information
    select table_schema,table_name,table_collation,engine,table_rows
    from information_schema.tables
    where table_schema NOT IN ('information_schema' , 'sys', 'mysql', 'performance_schema');
      
    -- Check whether there are stored procedures, triggers, scheduling events, etc
    select count(*) from information_schema.events;
    select count(*) from information_schema.routines;
    select count(*) from information_schema.triggers;
       
    -- View character set information
    show variables like 'character%';
    
  10. Some mysqldump options differ from mysqlpump. Note the following:

  • Option aliases: in mysqldump, --routines and --events have the aliases -R and -E; in mysqlpump they have no aliases
  • --master-data: mysqldump has this option; mysqlpump does not, so replication must be set up through MASTER_AUTO_POSITION, and you cannot directly specify a binlog file and position when building a replica
  • -d: in mysqldump, -d is an alias for --no-data; in mysqlpump, -d is an alias for --skip-dump-rows
  • DROP statements: mysqldump files contain DROP TABLE statements by default; mysqlpump files contain neither DROP TABLE nor DROP USER by default, so an import may report errors because the user or table already exists
  • No database specified: mysqldump reports an error if neither a database nor -A is given; mysqlpump backs up all data by default (except INFORMATION_SCHEMA, performance_schema, ndbinfo, and sys)
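For comparison, a sketch of a mysqlpump invocation that works around two of the differences above (connection options abbreviated; the parallelism value is an arbitrary illustration):

# mysqlpump can parallelize the dump, unlike mysqldump;
# --add-drop-table restores the DROP TABLE statements that mysqlpump omits by default
mysqlpump -uops -p --default-parallelism=4 --add-drop-table --databases employees > employees_pump.sql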

5, Known bugs

In all minor versions of MySQL 5.7 from 5.7.7 to the present (November 2, 2020), a backup taken with mysqldump --all-databases ends up clearing mysql.proc on restore, which leaves the sys schema unable to work properly.

mysql> SELECT * FROM sys.processlist;
ERROR 1356 (HY000): View 'sys.processlist' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them

When mysqldump --all-databases is used, the system stored procedures under mysql.proc are not backed up, so after the dump is imported into a new machine some views under sys can no longer be used normally. This is a bug and exists only in MySQL 5.7.

Bug links:

  • https://bugs.mysql.com/bug.php?id=86807

  • https://bugs.mysql.com/bug.php?id=92631

  • https://bugs.mysql.com/bug.php?id=83259

  • https://github.com/mysql/mysql-server/commit/ded3155def2ba3356017c958c49ff58c2cae1830

If the --all-databases option has already been used and sys is now throwing errors, the following repairs it:

mysql_upgrade --upgrade-system-tables --skip-verbose --force

-- When running mysql_upgrade for this fix, add --upgrade-system-tables; otherwise user schemas' tables are also scanned and locked for the duration, and the run is not fast.

6, Precautions before backup

6.1 For long-running backups or imports, use the screen window tool

If you are running DDL on a large table or batch-executing an SQL file and the remote connection drops halfway through, wouldn't that make you cough up blood? And if the batch of SQL ran without an explicit transaction, just think about the consequences.

When using screen, even if the current remote terminal is disconnected, the task in progress will not be interrupted:

## Install screen
yum install -y screen
  
## Open a screen
screen
  
## View all open screens
screen -ls
  
## Reconnect to the specified screen
screen -r xxx
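One more keystroke worth knowing:

## Detach from the current screen without stopping the running task: press Ctrl-a, then d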

6.2 Check database parameters

  • Parameter: lock_wait_timeout
    Reason: the default value is 1 year. If the default has not been changed, or the current value is very long, then when mysqldump runs into a long transaction and times out acquiring the lock, it does not exit; it keeps waiting for the MDL lock with no output on the console. Unless someone logs into the database to look, nobody notices that the backup is stalled and waiting, which holds up the backup schedule.
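A sketch of checking and tightening this before a backup (the 300-second value is an arbitrary illustration; restore the original value afterwards):

-- Check the current value (the default, 31536000 seconds, is one year)
show variables like 'lock_wait_timeout';
-- Lower it globally so the backup fails fast instead of waiting silently on an MDL lock
-- (only sessions opened after this, such as mysqldump's, pick up the new value)
set global lock_wait_timeout = 300;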

6.3 Check the database role

Check which role (primary or replica) the server that mysqldump backs up from plays. Note: if you run the mysqldump commands below against a replica, do not use --master-data and --flush-logs together; the combination may deadlock the database and affect both the backup and replication. A quick role check is sketched below.
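A sketch of the role check:

-- Empty output here means this instance is not a replica
show slave status\G
-- On a primary, lists the replicas connected to it
show slave hosts;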

7, Usage examples

7.1 Full database backup

Note that not only the data but also stored procedures, triggers, and events need to be backed up:

mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --hex-blob --triggers --routines --events --all-databases > /tmp/backup/all_db_with_data.sql
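After the dump finishes, a quick sanity check is possible because mysqldump writes a completion marker as its last line (absent if the dump was interrupted, or if --skip-comments or --compact was used):

# A complete dump ends with a line like: -- Dump completed on 2022-01-29 ...
tail -n 1 /tmp/backup/all_db_with_data.sql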

Note: MySQL 5.7 has the bug described earlier (see section 5, Known bugs), so one more step must be added:

Export the mysql.proc table on its own:
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --triggers --routines --events --tables mysql proc > /tmp/backup/mysql_proc.sql

The SQL exported above must be imported with SET @@SESSION.SQL_LOG_BIN = 0; already in effect, i.e. issue that statement first and only then run mysql> source mysql_proc.sql. Otherwise it causes exceptions and interferes with establishing replication. If you are not setting up replication, this import method can be ignored.

Import the mysql.proc table on its own:
# If replication is not being set up:
    any of several import methods will work
# If replication is being set up:
    mysql> use mysql
    mysql> SET @@SESSION.SQL_LOG_BIN= 0;
    mysql> source /tmp/backup/mysql_proc.sql

7.2 Single database backup

## Backup table structure and data
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --databases employees > /tmp/backup/employees_all.sql
 
## Backup table structure only
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --no-data --databases employees > /tmp/backup/employees_schema.sql
  
## Backup data only
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --no-create-info --databases employees > /tmp/backup/employees_only_data.sql

7.3 Single table backup

## Only the salaries table in the employees database is backed up
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --tables employees salaries > /tmp/backup/employees_salaries.sql
  
## Exclude specified tables from the specified database
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs --set-gtid-purged=off --hex-blob --ignore-table=employees.departments --ignore-table=employees.employees employees > /tmp/backup/employees_exclude_departments.sql

Note

If you want to exclude multiple tables from a database during backup, you must repeat the --ignore-table option for each table; the values cannot be comma-separated.

7.4 Generate separate data files (-T)

mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 --flush-logs -T /tmp/salaries --databases employees --tables salaries > employees_salaries.sql

Notes

  • The option value specifies the directory in which the per-table backup files are generated, which is why --databases ... --tables ... is given as shown above; the option cannot be combined with --all-databases
  • This option can only be used when mysqldump runs on the same machine as the mysqld server, because the data files are written by the server to its local filesystem. The MySQL account performing the backup must have the FILE privilege, and the secure_file_priv system variable must be set to an empty string (or to the target directory), because SELECT ... INTO OUTFILE is what actually writes the files
  • Note that when using this option, the specified directory must exist on the backup server itself; remote backup is impossible, because SELECT ... INTO OUTFILE writes only to the server's local directories, and the OS account running the mysqld process needs write access to them
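A sketch of verifying these prerequisites before running the -T backup:

-- '' means the server may write anywhere, NULL means INTO OUTFILE is disabled,
-- and a path restricts writes to that directory
show variables like 'secure_file_priv';
-- Confirm the backup account holds the FILE privilege
show grants for current_user();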
Examples of generated files:
less salaries/salaries.txt

10001   60117   1986-06-26      1987-06-26
10001   62102   1987-06-26      1988-06-25
10001   66074   1988-06-25      1989-06-25
10001   66596   1989-06-25      1990-06-25
10001   66961   1990-06-25      1991-06-25

7.5 Conditional backup (-w)

Dump only rows selected by the given WHERE condition

### Back up the rows matching the WHERE condition from a specified table in a specified database
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 -w 'id > 10 and id < 20' --databases testdb --tables sbtest1 > /tmp/backup/sbtest1.sql
  
### Back up the rows matching the WHERE condition from all tables in a specified database
mysqldump -uops -p -h127.0.0.1 -P4444 --default-character-set=utf8mb4 --single-transaction --master-data=2 -w 'id > 10 and id < 20' --databases testdb > /tmp/backup/sbtest1.sql

Notes

  1. If you use only --databases without --tables, note that every table in that database must contain the WHERE-condition column, with a consistent data type. Testing shows that if some table in the database lacks the column, mysqldump reports an error; if some table has a column of the same name but a different data type, mysqldump does not report an error, but that table's data is not backed up. Only the data of tables satisfying the WHERE condition is backed up.
  2. When only --tables is specified without --databases, or no option naming the database is given, mysqldump parses the first command-line argument after --tables as the database name and every argument from the second onward as a table name.
  3. If a backup object is specified multiple times, it is not deduplicated; it is backed up multiple times. For this reason it is not recommended to give a database and a table the same name in production.

8, Data import

8.1 Precautions before importing a backup

(1) Before importing a backup, check the configuration of the following parameters:

  • autocommit: recommended ON. If it is off, imported statements may not be committed automatically, which can lead to blocked transactions, transaction rollbacks, huge transactions in the binlog, and can even bring the database down. Command: set session autocommit = 1;
  • wait_timeout / interactive_timeout: recommended to increase. If they are set too small and the import runs long, the session times out and disconnects, the import is left incomplete, and the task fails. Commands: set session wait_timeout=28800; set session interactive_timeout=28800;
(2) Check whether the tables that the SQL file will drop are the ones you expect:
less all_db_with_data.sql | grep -E "^DROP TABLE IF EXISTS|^USE"

8.2 Use the pv tool to monitor import progress

With an ordinary import of a backup file, we can only guess at the actual import time from experience. The pv tool shows the remaining volume and estimated completion time much more accurately, so this method is recommended for data imports.

# Option notes:
# -W: wait for the first byte before showing the progress bar; useful when a password prompt must complete first
# -L: rate limit, capping the transfer at the given bytes per second (size and unit are customizable)
shell> pv -W -L 2M all_db_with_data.sql | /data/mysql/base/5.7.25/bin/mysql -uops -p -S/data/mysql/data/3333/mysqld.sock
Enter password: 
 588MiB 0:04:54 [   2MiB/s] [======================================================>] 100%

8.3 mysql client + pv + session parameters (recommended)

# Parameter notes:
# reset master: if the importing server already has GTIDs of its own, the import will fail, so add the "reset master" command to the echo as needed to clear the GTID state first.
  
shell> (echo 'set @@autocommit=1; set @@interactive_timeout=28800; reset master;'; pv -W -L 5M all_db_with_data.sql) | mysql -h127.0.0.1 -P3333 -uops -p
 
shell> (echo 'set @@autocommit=1; set @@interactive_timeout=28800;'; pv -W -L 5M employees_all.sql) | mysql -h127.0.0.1 -P3333 -uops -p employees

9, Notes on special problems

9.1 Scenario 1:

To avoid generating excessive binlog during a SQL import and increasing replication delay, I want to switch the log format to STATEMENT to reduce binlog volume during the import.

Parameters involved:

transaction_isolation

binlog_format

Note:

If a table uses InnoDB and the transaction isolation level is READ COMMITTED or READ UNCOMMITTED, only row-based logging can be used. You can still change the log format to STATEMENT, but doing so at runtime causes errors, because INSERT statements against InnoDB tables can then no longer execute:

mysql> select @@binlog_format;
+-----------------+
| @@binlog_format |
+-----------------+
| STATEMENT       |
+-----------------+
1 row in set (0.00 sec)

mysql> show variables like '%iso%';
+-----------------------+----------------+
| Variable_name         | Value          |
+-----------------------+----------------+
| transaction_isolation | READ-COMMITTED |
| tx_isolation          | READ-COMMITTED |
+-----------------------+----------------+
2 rows in set (0.00 sec)

mysql> create table test(id int, name varchar(20));
Query OK, 0 rows affected (0.02 sec)

mysql> insert into test values(1, 'zhou'),(2, 'wei');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from test;
+------+------+
| id   | name |
+------+------+
|    1 | zhou |
|    2 | wei  |
+------+------+
2 rows in set (0.00 sec)

mysql> insert into test values(3,'bing');
ERROR 1665 (HY000): Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.

So when a backup import calls for temporarily setting binlog_format to STATEMENT, first check whether transaction_isolation is REPEATABLE READ (RR). At the RC level, with binlog_format set to STATEMENT, INSERT-type operations cannot run. Therefore, if the isolation level is not RR, do not change binlog_format to STATEMENT.
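A sketch of the safe sequence for an import session (assumes the account is allowed to change binlog_format, e.g. holds SUPER):

mysql> -- Proceed only if this returns REPEATABLE-READ
mysql> select @@transaction_isolation;
mysql> -- Switch only this session's binlog format for the duration of the import
mysql> set session binlog_format = 'STATEMENT';
mysql> source /tmp/backup/all_db_with_data.sql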

9.2 Scenario 2:

A packet produced during backup or import is too large, exceeding max_allowed_packet, and the backup or import fails.

Parameters involved:

max_allowed_packet

Note:

Consider the column types in the database tables when backing up or importing: if there are BLOB columns, long strings, or similar types, it is advisable to increase this value accordingly.

#  mysqldump -h10.100.143.62 -umeslogin -P3306 -p --default-character-set=utf8mb4 --master-data=2 --flush-logs --single-transaction --set-gtid-purged=off --hex-blob --databases yfmes_metal_sys > yfmes_metal_sys_all.sql
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `operlog` at row: 264227

Similar principles apply when importing.

You can override this parameter temporarily by appending it to the mysql, mysqldump, and other client commands:

mysqldump [xxxx] --max_allowed_packet=256M > dump.sql
mysql [xxxx] --max_allowed_packet=256M < dump.sql
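Note that the client-side option above is still bounded by the server's own setting; a sketch of checking and raising the server side (the global change affects only sessions opened afterwards):

mysql> show variables like 'max_allowed_packet';
mysql> -- Requires SUPER; 268435456 bytes = 256M, mirroring the client-side example above
mysql> set global max_allowed_packet = 268435456;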

Because there is regret, there are the stars and the moon; because there are dreams, we go all out! First published on my personal blog: easydb.net. WeChat official account: easydb. Follow me and don't get lost!

Topics: Database MySQL SQL