Delete the database without running away: a MySQL data recovery tutorial

Posted by tunari on Tue, 11 Jan 2022 03:48:29 +0100

Sooner or later we all delete data, or even a whole database, by mistake. Does that mean it is time to run away? I don't think so: a programmer should learn how to quietly save the day and get the data back.

In MySQL, the binlog records every change we make to the database, so the binlog is our sharpest tool for self-rescue.

Taking MySQL as the example, this article shows how to recover. In day-to-day work there is always a table or some rows deleted by mistake: a shaky hand, a wrong WHERE condition, a wrong table name, or a session accidentally connected to the production database.

1. Prerequisites

The prerequisites for recovery are: have a backup, and enable the binlog in ROW format. If there is no backup file, a dropped database or table is simply gone; if the deleted files still show up in lsof, it may be possible to recover some of them, but if the database no longer holds those table files open, all that is left is to run. If the binlog is not enabled, everything written after the backup point is lost after recovery. If the binlog format is not ROW, there is no way to flash back a data misoperation, and you have to fall back on the regular backup-and-restore process.
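For reference, the relevant settings usually live in my.cnf; a minimal sketch (the values are examples, not taken from this article):

# my.cnf
[mysqld]
server_id        = 83
log_bin          = /data/mysql_log/mysql_test/mysql-bin
binlog_format    = ROW
binlog_row_image = FULL
expire_logs_days = 7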

2. Direct recovery

Direct recovery, that is, restoring everything from a backup file, is the most common scenario.

2.1 Full recovery from a mysqldump backup

Recovering from a mysqldump file is simple: decompress it and pipe it into mysql:

gzip -dc backup.sql.gz | mysql -u<user> -h<host> -P<port> -p
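If the dump is not compressed, it can be fed to mysql directly with the same connection options:

mysql -u<user> -h<host> -P<port> -p < backup.sql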

2.2 Full recovery from an xtrabackup backup

Recovery process:

# Step 1: decompress (if there is no compression, you can ignore this step)
innobackupex --decompress <Directory of backup files>

# Step 2: apply log
innobackupex --apply-log <Directory of backup files> 

# Step 3: copy the backup file to the data directory
innobackupex --datadir=<MySQL Data directory> --copy-back <Directory of backup files>
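After copy-back the files are owned by the user who ran innobackupex, so in practice a final step like the following is usually needed before starting MySQL (the service name depends on your installation):

# Step 4: fix ownership and start the instance
chown -R mysql:mysql <MySQL Data directory>
systemctl start mysqld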

2.3 Point-in-time recovery

Point-in-time recovery relies on the binlog: find all the binlog events from the backup point up to the target time, then apply them. Let's try it out.

Create a test table:

chengqm-3306>>show create table mytest.mytest \G;
*************************** 1. row ***************************
       Table: mytest
Create Table: CREATE TABLE `mytest` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `ctime` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Insert one row per second:

[mysql@mysql-test ~]$ while true; do mysql -S /tmp/mysql.sock -e 'insert into mytest.mytest(ctime)values(now())';date;sleep 1;done

Take a backup:

[mysql@mysql-test ~]$ mysqldump --opt --single-transaction --master-data=2 --default-character-set=utf8 -S /tmp/mysql.sock -A > backup.sql

Find the log location at the time of backup:

[mysql@mysql-test ~]$ head -n 25 backup.sql | grep 'CHANGE MASTER TO MASTER_LOG_FILE'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000032', MASTER_LOG_POS=39654;

Suppose we want to recover to the point in time 2019-08-09 11:01:54. We extract from the binlog everything from position 39654 up to 2019-08-09 11:01:54.

[mysql@mysql-test ~]$ mysqlbinlog --start-position=39654 --stop-datetime='2019-08-09 11:01:54' /data/mysql_log/mysql_test/mysql-bin.000032 > backup_inc.sql
[mysql@mysql-test-83 ~]$ tail -n 20 backup_inc.sql
......
### INSERT INTO `mytest`.`mytest`
### SET
###   @1=161 /* INT meta=0 nullable=0 is_null=0 */
###   @2='2019-08-09 11:01:53' /* DATETIME(0) meta=0 nullable=1 is_null=0 */
......

Current row counts:

-- rows before 2019-08-09 11:01:54
chengqm-3306>>select count(*) from mytest.mytest where ctime < '2019-08-09 11:01:54';
+----------+
| count(*) |
+----------+
|      161 |
+----------+
1 row in set (0.00 sec)

-- total number of rows
chengqm-3306>>select count(*) from mytest.mytest;
+----------+
| count(*) |
+----------+
|      180 |
+----------+
1 row in set (0.00 sec)

Then perform the recovery:

# Full recovery
[mysql@mysql-test ~]$ mysql -S /tmp/mysql.sock < backup.sql 

# Apply incremental log
[mysql@mysql-test ~]$ mysql -S /tmp/mysql.sock < backup_inc.sql

Check data:

chengqm-3306>>select count(*) from mytest.mytest;
+----------+
| count(*) |
+----------+
|      161 |
+----------+
1 row in set (0.00 sec)

chengqm-3306>>select * from mytest.mytest order by id desc limit 5;
+-----+---------------------+
| id  | ctime               |
+-----+---------------------+
| 161 | 2019-08-09 11:01:53 |
| 160 | 2019-08-09 11:01:52 |
| 159 | 2019-08-09 11:01:51 |
| 158 | 2019-08-09 11:01:50 |
| 157 | 2019-08-09 11:01:49 |
+-----+---------------------+
5 rows in set (0.00 sec)

It has been restored to 11:01:54 on August 9, 2019.

3. Restoring a single table

3.1 Restore a single table from a mysqldump backup

Suppose the table to restore is mytest.mytest:

# Extract everything for the mytest database from the full dump
sed -n '/^-- Current Database: `mytest`/,/^-- Current Database:/p' backup.sql > backup_mytest.sql

# Extract the CREATE TABLE statement from the database dump
sed -e'/./{H;$!d;}' -e 'x;/CREATE TABLE `mytest`/!d;q' backup_mytest.sql > mytest_table_create.sql

# Extract the INSERT statements from the database dump
grep -i 'INSERT INTO `mytest`' backup_mytest.sql > mytest_table_insert.sql

# Restore the table structure into the mytest database
mysql -u<user> -p mytest < mytest_table_create.sql

# Restore the data into the mytest.mytest table
mysql -u<user> -p mytest < mytest_table_insert.sql

3.2 Restore a single table from an xtrabackup backup

Assume /backup_xtra_full is the decompressed backup directory on which --apply-log has already been run.

3.2.1 MyISAM table

Suppose we want to restore the table mytest.t_myisam from the backup. Find the three files t_myisam.frm, t_myisam.MYD and t_myisam.MYI in the backup, copy them into the corresponding data directory, fix the ownership, then log in to MySQL and check the table:
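A minimal sketch of those steps, assuming the backup is in /backup_xtra_full and the data directory is /data/mysql_data (the data directory path is an assumption):

# copy the MyISAM files for mytest.t_myisam into the data directory and fix ownership
cp /backup_xtra_full/mytest/t_myisam.frm /data/mysql_data/mytest/
cp /backup_xtra_full/mytest/t_myisam.MYD /data/mysql_data/mytest/
cp /backup_xtra_full/mytest/t_myisam.MYI /data/mysql_data/mytest/
chown mysql:mysql /data/mysql_data/mytest/t_myisam.*

# then check the table from inside MySQL
mysql -S /tmp/mysql.sock -e 'check table mytest.t_myisam'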

3.2.2 InnoDB table

Suppose we want to restore the table mytest.t_innodb from the backup. The prerequisite for this method is innodb_file_per_table = ON. The steps are:

  • Start a new instance;

  • Create a table with exactly the same structure on that instance;

  • Execute alter table t_innodb discard tablespace; to discard the tablespace (this deletes t_innodb.ibd);

  • Find the t_innodb.ibd file in the backup, copy it into the corresponding data directory, and fix the ownership;

  • Execute alter table t_innodb import tablespace; to load the tablespace;

  • Execute flush table t_innodb; and check table t_innodb; to check the table;

  • Use mysqldump to export the data, then import it into the database that needs to be recovered (the core statements are sketched below).
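The core statements for steps 2-6, sketched on the new instance (the table definition is a placeholder; use the structure from the backup):

-- step 2: create an identical empty table
-- CREATE TABLE mytest.t_innodb (...);

-- step 3: discard the empty tablespace (this removes t_innodb.ibd)
alter table mytest.t_innodb discard tablespace;

-- step 4: copy t_innodb.ibd from the backup into the instance's mytest/ directory and chown it to mysql

-- step 5: load the copied tablespace
alter table mytest.t_innodb import tablespace;

-- step 6: verify
flush table mytest.t_innodb;
check table mytest.t_innodb;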

Note:

  • To avoid risk, do the restore and dump on a separate new instance; if this is just a test, steps 2-6 can be run directly on the original database;

  • This method is only valid for versions before MySQL 8.0.

4. Skipping the offending SQL

Skipping the offending SQL is generally used for operations that cannot be flashed back, such as DROP TABLE or DROP DATABASE.

4.1 Skip the statement while recovering from a backup file

4.1.1 GTID not enabled

The steps are similar to point-in-time recovery; the extra work is locating the offending statement in the binlog so it can be skipped. For this example I create two tables, a and b, insert one row per minute into each, take a full backup, and then drop table b. The goal is to skip that DROP statement.

Database status after table b has been dropped:

chgnqm-3306>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
+------------------+
1 row in set (0.00 sec)

1. Find the log position at the time of the backup

[mysql@mysql-test ~]$ head -n 25 backup.sql | grep 'CHANGE MASTER TO MASTER_LOG_FILE'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000034', MASTER_LOG_POS=38414;

2. Find the position where the DROP TABLE statement was executed

[mysql@mysql-test mysql_test]$  mysqlbinlog -vv /data/mysql_log/mysql_test/mysql-bin.000034 | grep -i -B 3 'drop table `b`';
# at 120629
#190818 19:48:30 server id 83  end_log_pos 120747 CRC32 0x6dd6ab2a     Query    thread_id=29488    exec_time=0    error_code=0
SET TIMESTAMP=1566128910/*!*/;
DROP TABLE `b` /* generated by server */

The output shows that the DROP statement starts at position 120629 and ends at position 120747.

3. Extract everything else from the binlog, skipping this statement

# First part: start at the backup position, stop at the start of the DROP statement
mysqlbinlog -vv --start-position=38414 --stop-position=120629 /data/mysql_log/mysql_test/mysql-bin.000034 > backup_inc_1.sql

# Second part: start at the end position of the DROP statement
mysqlbinlog -vv --start-position=120747 /data/mysql_log/mysql_test/mysql-bin.000034 > backup_inc_2.sql

4. Restore the full backup

[mysql@mysql-test ~]$ mysql -S /tmp/mysql.sock < backup.sql

Status after the full recovery:

chgnqm-3306>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
| b                |
+------------------+
2 rows in set (0.00 sec)

chgnqm-3306>>select count(*) from a;
+----------+
| count(*) |
+----------+
|       71 |
+----------+
1 row in set (0.00 sec)

5. Apply the incremental data

[mysql@mysql-test ~]$ mysql -S /tmp/mysql.sock < backup_inc_1.sql
[mysql@mysql-test ~]$ mysql -S /tmp/mysql.sock < backup_inc_2.sql

After recovery, we can see that the DROP statement has been skipped:

chgnqm-3306>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
| b                |
+------------------+
2 rows in set (0.00 sec)

chgnqm-3306>>select count(*) from a;
+----------+
| count(*) |
+----------+
|      274 |
+----------+
1 row in set (0.00 sec)

4.1.2 GTID enabled

With GTID, the offending SQL can be skipped directly. The steps are:

  • Find the log position at the time of the backup;

  • Find the GTID value of the DROP TABLE statement that was executed;

  • Export the binlog from the backup position up to the latest binlog;

  • Restore the backup file;

  • Skip this GTID:

SET SESSION GTID_NEXT='Corresponding GTID value';
BEGIN; COMMIT;
SET SESSION GTID_NEXT = AUTOMATIC;

Finally, apply the incremental binlog obtained in step 3.
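For step 3, the export is the same idea as in the non-GTID case; reusing the file name and backup position from section 4.1.1 purely as an illustration, it would look like:

mysqlbinlog --start-position=38414 /data/mysql_log/mysql_test/mysql-bin.000034 > backup_inc.sql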

4.2 Skip using a delayed replica

4.2.1 GTID not enabled

The key operation when using a delayed replica is start slave until. In the test environment I set up two MySQL nodes, with node 2 replicating with a 600-second delay, created two tables a and b, and inserted one row per second to simulate business writes.

localhost:3306 -> localhost:3307(delay 600)
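For reference, the 600-second delay on node 2 would typically be set up roughly like this (a sketch; the connection options of change master to are omitted):

-- on node 2 (the replica)
stop slave;
change master to master_delay = 600;
start slave;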

Current node 2 status:

chengqm-3307>>show slave status \G;
...
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000039
          Read_Master_Log_Pos: 15524
               Relay_Log_File: mysql-relay-bin.000002
                Relay_Log_Pos: 22845
        Relay_Master_Log_File: mysql-bin.000038
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
        Seconds_Behind_Master: 600
...

Tables currently on node 2:

chengqm-3307>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
| b                |
+------------------+

Delete table b at node 1:

chengqm-3306>>drop table b;
Query OK, 0 rows affected (0.00 sec)

chengqm-3306>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
+------------------+
1 row in set (0.00 sec)

The following steps skip this SQL on the delayed replica.

1. Stop replication on the delayed replica

stop slave;

2. Find the position immediately before the DROP TABLE statement

[mysql@mysql-test ~]$ mysqlbinlog -vv /data/mysql_log/mysql_test/mysql-bin.000039 | grep -i -B 10 'drop table `b`';
...
# at 35134
#190819 11:40:25 server id 83  end_log_pos 35199 CRC32 0x02771167     Anonymous_GTID    last_committed=132    sequence_number=133    rbr_only=no
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 35199
#190819 11:40:25 server id 83  end_log_pos 35317 CRC32 0x50a018aa     Query    thread_id=37155    exec_time=0    error_code=0
use `mytest`/*!*/;
SET TIMESTAMP=1566186025/*!*/;
DROP TABLE `b` /* generated by server */

The output shows that the transaction containing the DROP statement begins at position 35134, so we replicate up to 35134 and stop there (be careful not to pick the wrong position).

3. Replicate on the delayed replica up to the position just before the statement to skip

change master to master_delay=0;
start slave until master_log_file='mysql-bin.000039',master_log_pos=35134;

Check the status; the SQL thread has stopped at the target position:

chengqm-3307>>show slave status \G;
...
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000039
          Read_Master_Log_Pos: 65792
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
          Exec_Master_Log_Pos: 35134
...
               Until_Log_File: mysql-bin.000039
                Until_Log_Pos: 35134

4. Skip one statement and restart replication

set global sql_slave_skip_counter=1;
start slave;

Check the synchronization status. The statement to delete table b has been skipped:

chengqm-3307>>show slave status \G;
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
1 row in set (0.00 sec)

chengqm-3307>>show tables;
+------------------+
| Tables_in_mytest |
+------------------+
| a                |
| b                |
+------------------+
2 rows in set (0.00 sec)

4.2.2 GTID enabled

With GTID, the procedure is much simpler: just commit an empty transaction with the same GTID as the SQL to be skipped. The steps are:


  • Stop replication;

  • Find the GTID of the DROP TABLE statement that was executed;

  • Commit an empty transaction with that GTID:

  • SET SESSION GTID_NEXT='Corresponding GTID value';
    BEGIN; COMMIT;
    SET SESSION GTID_NEXT = AUTOMATIC;

  • Resume replication. The whole sequence is sketched below.
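Put together, the sequence on the delayed replica looks roughly like this (the GTID value is a placeholder taken from the binlog):

stop slave;
SET SESSION GTID_NEXT='Corresponding GTID value';
BEGIN; COMMIT;
SET SESSION GTID_NEXT = AUTOMATIC;
start slave;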

5. Flashback

A flashback executes the reverse operation: for example, if delete from a where id=1 was run by mistake, the flashback runs the corresponding insert into a (id,...) values(1,...). It only works for data misoperations, i.e. DML statements, and requires the binlog format to be ROW. This chapter introduces two easy-to-use open-source tools.

5.1 binlog2sql

binlog2sql is an open-source binlog parsing tool from Dianping that can be used to generate flashback statements. The project lives at https://github.com/danfengcao/binlog2sql.

5.1.1 Installation

wget https://github.com/danfengcao/binlog2sql/archive/master.zip -O binlog2sql.zip
unzip binlog2sql.zip
cd binlog2sql-master/

# Installation dependency
pip install -r requirements.txt

5.1.2 Generate the rollback SQL

# By time range
python binlog2sql/binlog2sql.py --flashback \
-h<host> -P<port> -u<user> -p'<password>' -d<dbname> -t<table_name> \
--start-file='<binlog_file>' \
--start-datetime='<start_time>' \
--stop-datetime='<stop_time>' > ./flashback.sql

# By binlog position range
python binlog2sql/binlog2sql.py --flashback \
-h<host> -P<port> -u<user> -p'<password>' -d<dbname> -t<table_name> \
--start-file='<binlog_file>' \
--start-position=<start_pos> \
--stop-position=<stop_pos> > ./flashback.sql
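The generated flashback.sql should be reviewed first and then applied back to the database, for example with the same connection options:

mysql -h<host> -P<port> -u<user> -p'<password>' <dbname> < flashback.sql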

5.2 MyFlash

MyFlash is a tool for rolling back DML operations, developed and maintained by the Meituan-Dianping technical engineering department. The project lives at https://github.com/Meituan-Dianping/MyFlash.

Limitations:

  • Binlog format must be row and binlog_row_image=full;

  • Only MySQL 5.6 and 5.7 are supported;

  • Only DML statements (INSERT, UPDATE, DELETE) can be rolled back.

5.2.1 Installation

# Dependency (centos)
yum install gcc*  pkg-config glib2 libgnomeui-devel -y

# Download File
wget https://github.com/Meituan-Dianping/MyFlash/archive/master.zip -O MyFlash.zip
unzip MyFlash.zip
cd MyFlash-master

# Compile and install
gcc -w  `pkg-config --cflags --libs glib-2.0` source/binlogParseGlib.c  -o binary/flashback
mv binary /usr/local/MyFlash
ln -s /usr/local/MyFlash/flashback /usr/bin/flashback

5.2.2 Usage

Generate rollback statement:

flashback --databaseNames=<dbname> --binlogFileNames=<binlog_file> --start-position=<start_pos> --stop-position=<stop_pos> 

After execution a binlog_output_base.flashback file is produced; it must be parsed with mysqlbinlog and then applied:

mysqlbinlog -vv binlog_output_base.flashback | mysql -u<user> -p
