MySQL: SQL optimization - commonly used SQL optimizations

Posted by blues on Wed, 08 May 2019 20:24:04 +0200

1. Bulk insertion of data

For MyISAM tables, disable index updates before a bulk insert and rebuild the indexes afterwards:

alter table xxx DISABLE KEYS;                         -- Turn off index updates

-- insert data

alter table xxx ENABLE KEYS;                          -- Rebuild the indexes in one pass

Insert optimization for InnoDB tables:

1) Import the data sorted by primary key; InnoDB stores rows in primary key order, so pre-sorted data loads more efficiently.
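A minimal sketch of preparing such an import (src_table, its id primary key, and the file path are hypothetical; SELECT ... INTO OUTFILE also requires the FILE privilege and a path permitted by secure_file_priv):

select * into outfile '/tmp/sorted.csv' fields terminated by ','
from src_table order by id;    -- export in primary key order, then import the sorted file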

2) Turn off uniqueness checking

SET UNIQUE_CHECKS = 0;

-- insert data

SET UNIQUE_CHECKS = 1;

3) If the application runs with autocommit enabled, disable it before the insert and re-enable it afterwards, so that all rows are committed in one transaction instead of one transaction per statement.

SET AUTOCOMMIT = 0;

-- insert data

SET AUTOCOMMIT = 1;

2. Optimizing insert statements

1) When inserting a large amount of data, use a single multi-row INSERT instead of many separate INSERT statements.

insert into xx values(1, 2), (3, 4), (5, 6) ...;

2) If rows are being inserted by many different clients, INSERT DELAYED can improve speed: the statement returns to the client immediately, and the rows are queued in memory and flushed to disk later, which is faster than having each client wait for its insert to finish. (INSERT DELAYED only works with MyISAM-family engines; it was deprecated in MySQL 5.6 and is no longer honored as of 5.7.)
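A minimal sketch, assuming a hypothetical MyISAM table log_table on a server old enough to still support DELAYED:

insert delayed into log_table (msg) values ('hello');   -- returns at once; the row is queued in memory and written later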

3) Put the index file and the data file on different disks (for MyISAM, via the DATA DIRECTORY and INDEX DIRECTORY options of CREATE TABLE).
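A sketch of the table options involved (the disk paths are hypothetical, and the server must permit symlinked table files):

create table logs (
    id int not null auto_increment primary key,
    msg varchar(255)
) engine = MyISAM
  data directory = '/disk1/mysql_data'      -- the .MYD data file goes here
  index directory = '/disk2/mysql_index';   -- the .MYI index file goes here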

4) For MyISAM tables, increase the bulk_insert_buffer_size variable to speed up batch inserts.
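For example (the 64MB figure is purely illustrative; the default is 8MB):

set session bulk_insert_buffer_size = 64 * 1024 * 1024;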

5) LOAD DATA INFILE is nearly 20 times faster than a long series of INSERT statements.
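A minimal sketch (the file path, terminators, and table name are placeholders; the server's secure_file_priv setting must allow the path):

load data infile '/tmp/data.csv' into table xxx
fields terminated by ',' lines terminated by '\n';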

3. Optimizing order by statements

1) MySQL's two ways of sorting

a. Scan an ordered index sequentially and return the rows already sorted

select customer_id from customer order by store_id;       -- Using index (efficient)

b. Fetch the result set first, then sort it

select * from customer order by store_id;                -- Using filesort

Run EXPLAIN and check the Extra column: Using index is more efficient than Using filesort.

Filesort sorts rows in a memory sort area whose size is set by the sort_buffer_size system variable. If that memory is not enough, the data is split into blocks on disk, each block is sorted, and the sorted blocks are merged into the final ordered result set. sort_buffer_size is allocated per thread, so MySQL may hold multiple sort buffers at the same time.

Optimization goal: minimize extra sorting by returning ordered data directly through the index.

The following ORDER BY statements will use the index:

select * from xxx order by key_part1, key_part2 ...;            -- key_part1 and key_part2 belong to the same composite index

select * from xxx where key_part1 = xxx order by key_part1 desc, key_part2 desc, ...;

The following ORDER BY statements will not use an index:

select * from xxx order by key_part1 desc, key_part2 asc;   -- mixes DESC and ASC

select * from xxx where key_part2 = xxx order by key_part1;   -- WHERE and ORDER BY use different key parts

select * from xxx order by key1, key2;                      -- key1 and key2 are two different indexes
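Each case can be checked with EXPLAIN. A sketch, assuming a composite index on (key_part1, key_part2) and a pre-8.0 server without descending indexes:

explain select key_part1, key_part2 from xxx order by key_part1, key_part2;   -- Extra: Using index
explain select * from xxx order by key_part1 desc, key_part2 asc;             -- Extra: Using filesort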

2) filesort optimization

Filesort has two algorithms:

Two-pass algorithm: the first pass reads only the sort columns and row pointers; the second pass fetches the full rows by pointer, which generates a lot of random I/O. Memory overhead is low.

Single-pass algorithm: all needed columns of the qualifying rows are read at once, sorted in the sort area, and the result is output directly. Memory overhead is higher, and so is efficiency.

a) Increase max_length_for_sort_data appropriately so that MySQL chooses the more efficient single-pass algorithm.

b) Increase sort_buffer_size appropriately so that the sort completes in memory rather than spilling to temporary files on disk.

c) Select only the columns you need instead of SELECT *; this reduces the space consumed in the sort area.
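A sketch of tuning both variables for the current session (the sizes are illustrative, not recommendations):

set session max_length_for_sort_data = 8192;      -- favor the single-pass algorithm (default 1024)
set session sort_buffer_size = 4 * 1024 * 1024;   -- give each sort up to 4MB of memory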

4. Optimizing group by statements

Before MySQL 8.0, GROUP BY implicitly sorts the result by the grouped columns. If you don't need the output ordered, suppress that sort with ORDER BY NULL:

select xxx from xxx group by xxx order by null;
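A sketch of the difference, using the sakila payment table, whose payment_date column is not indexed (the exact Extra output depends on the server version; MySQL 8.0 dropped the implicit GROUP BY sort entirely):

explain select payment_date, sum(amount) from payment group by payment_date;
-- Extra: Using temporary; Using filesort

explain select payment_date, sum(amount) from payment group by payment_date order by null;
-- Extra: Using temporary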

5. Optimizing nested queries

select * from customer where customer_id not in (select customer_id from payment);

select * from customer a left join payment b on a.customer_id = b.customer_id where b.customer_id is null;

-- Replacing the subquery with a JOIN is often more efficient, because the JOIN avoids building an in-memory temporary table for the subquery's result. (The two queries match only when payment.customer_id is never NULL.)

6. Optimizing OR conditions

Every column referenced in an OR condition needs its own index; if any of them lacks one, MySQL cannot use an index for the query at all.
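A sketch using the sakila customer table, where store_id and last_name each have an index; MySQL can often combine the two with an index_merge plan, and the OR can also be rewritten as a UNION by hand:

explain select * from customer where store_id = 1 or last_name = 'SMITH';   -- type: index_merge (typically)

select * from customer where store_id = 1
union
select * from customer where last_name = 'SMITH';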

7. Optimizing paging queries

limit 1000,20: MySQL must sort and step over the first 1000 rows only to discard them and return 20, so the cost of the query grows with the offset.

1) Idea 1: sort and paginate on the index alone, then join back to the original table by primary key to fetch the remaining columns.

select film_id, description from film order by title limit 50,5;    -- Using filesort

select a.film_id, a.description from film a inner join (select film_id from film order by title limit 50,5) b on a.film_id = b.film_id;          

-- Using index

This works because the inner query, select film_id from film order by title limit 50,5, can be resolved entirely from the index (title is indexed, and secondary index entries carry the primary key film_id), whereas select film_id, description from film order by title limit 50,5 also needs the description column and therefore falls back to a filesort.

2) Idea 2: convert LIMIT n,m into LIMIT m by remembering where the previous page ended (this only works when the sort column has no duplicate values):

select * from xxx order by id limit 400, 10;

-- Remember the sort-column value of the last row of the previous page, then:

select * from xxx where id > [last id of the previous page] order by id limit 10;

8. Using SQL hints

select count(*) from xxx use index (idx_xxx);      -- suggest that MySQL use this index

select count(*) from xxx ignore index (idx_xxx);   -- tell MySQL to ignore this index

select count(*) from xxx force index (idx_xxx);    -- force MySQL to use this index even when the optimizer would prefer a full table scan

 

Topics: MySQL, SQL