Thread Synchronization Basics
In concurrent code we often need more than protecting data: we also want to synchronize operations between threads, such as waiting for a condition to become true or running some work when an event occurs. The C++ standard library provides condition variables and futures; the Concurrency Technical Specification (TS) additionally provides latches and barriers.
Condition variables
The standard library provides two condition variables: std::condition_variable and std::condition_variable_any. Both must be used together with a mutex to provide synchronization. The former works only with std::mutex, while the latter works with any type that meets the minimum requirements of a mutex (BasicLockable). That generality comes at the cost of extra size and performance overhead, so prefer std::condition_variable unless the flexibility is needed.
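As a minimal sketch of where std::condition_variable_any earns its keep (the names here are illustrative, not from the example below): it can wait on lock types that std::condition_variable cannot accept, such as std::shared_lock over a std::shared_mutex (C++17).

```cpp
#include <condition_variable>
#include <iostream>
#include <shared_mutex>
#include <thread>

// condition_variable_any can wait on any BasicLockable, e.g. a
// std::shared_lock, which std::condition_variable cannot accept.
std::shared_mutex rw_mutex;
std::condition_variable_any cv_any;
bool ready = false;

void reader(int id) {
    std::shared_lock<std::shared_mutex> lk(rw_mutex); // shared ownership
    cv_any.wait(lk, [] { return ready; });            // only cv_any allows this
    std::cout << "reader " << id << " sees ready\n";
}

int main() {
    std::thread r1(reader, 1), r2(reader, 2);
    {
        std::unique_lock<std::shared_mutex> lk(rw_mutex);
        ready = true;
    }
    cv_any.notify_all(); // wake every reader
    r1.join();
    r2.join();
}
```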
```cpp
#include <condition_variable>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex mut;
std::queue<int> data_queue;
std::condition_variable data_cond;

int prepare_data() {
    return rand() % 100;
}

void process(int data, std::size_t remaining) {
    std::cout << "process data: " << data << " remain: " << remaining << std::endl;
}

bool is_last_chunk(int data) {
    return data == 0;
}

void data_preparation_thread() {
    // srand seeds the generator for the current thread only
    srand((unsigned int)time(nullptr));
    while (true) {
        const int data = prepare_data();
        {
            std::lock_guard<std::mutex> lk(mut); // 1
            data_queue.push(data);
        } // 3: the separate scope releases the mutex here
        data_cond.notify_one(); // 4
        if (is_last_chunk(data))
            break;
    }
}

void data_processing_thread() {
    while (true) {
        std::unique_lock<std::mutex> lk(mut); // 2
        data_cond.wait(lk, [] { return !data_queue.empty(); });
        int data = data_queue.front();
        data_queue.pop();
        // read the remaining size while still holding the lock,
        // to avoid a data race with the producer
        std::size_t remaining = data_queue.size();
        lk.unlock();
        process(data, remaining); // 5
        if (is_last_chunk(data))
            break;
    }
}

int main() {
    std::thread thread1(data_preparation_thread);
    std::thread thread2(data_processing_thread);
    thread1.join();
    thread2.join();
}
```
This simulates one thread producing data while another thread consumes it. The produced data is a sequence of random integers, with 0 acting as the end-of-data marker.
Obviously, the mutex (1, 2) must be acquired before putting data in or taking it out. On the producer thread, std::lock_guard is the most convenient choice; the extra scope releases the mutex at position 3. Only then, after the push, does notify_one on the condition variable (4) notify a thread waiting on that condition. Notifying after the lock is released means the woken thread does not immediately block again trying to acquire the mutex.
For the data-processing thread we use std::unique_lock rather than std::lock_guard, for two reasons. First, std::unique_lock can be unlocked and relocked manually, which improves efficiency: there is no need to hold the lock while processing the data at position 5. Second, it is required by the condition variable's wait method. wait takes two arguments: the first is a std::unique_lock, and the second is a callable object representing the condition to wait for. With the mutex held, wait evaluates the callable; if it returns false, the lock is released and the thread keeps waiting, and if it returns true, wait returns with the lock still held until unlock is called or the lock leaves scope. "Waiting" here means the thread enters a blocked state; when another thread calls notify_one on the same condition variable, a waiting thread reacquires the lock and re-checks the condition, continuing to wait if it is not met and continuing execution if it is.
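The two-argument overload of wait is specified to behave exactly like the loop below, which also shows why the predicate may be evaluated several times and why spurious wakeups are harmless when a predicate is used (the helper name is illustrative):

```cpp
#include <condition_variable>
#include <mutex>

// Sketch of what the standard specifies: wait(lk, pred) is equivalent
// to checking the predicate in a loop around the plain wait(lk).
template <typename Predicate>
void wait_equivalent(std::condition_variable& cv,
                     std::unique_lock<std::mutex>& lk,
                     Predicate pred) {
    while (!pred()) { // condition checked while the lock is held
        cv.wait(lk);  // atomically unlocks and blocks; relocks on wakeup
    }
}
```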
Note that because the condition may be checked multiple times, the function or callable object passed to wait should not have side effects.
Running the program, you will find that not every notify_one wakes the waiting thread: the queue backs up (remain climbs to 25) before it drains. The reason is that a notification issued while no thread is blocked in wait is simply discarded rather than queued; the consumer may still be processing the previous item, or simply not scheduled, when the producer notifies, so those notifications are lost and the consumer catches up via the predicate check instead. The reference output is as follows:
```
process data: 92 remain: 12
process data: 41 remain: 25
process data: 93 remain: 24
process data: 95 remain: 23
process data: 39 remain: 22
process data: 68 remain: 21
process data: 82 remain: 20
process data: 74 remain: 19
process data: 95 remain: 18
process data: 38 remain: 17
process data: 26 remain: 16
process data: 81 remain: 15
process data: 98 remain: 14
process data: 39 remain: 13
process data: 95 remain: 12
process data: 7 remain: 11
process data: 32 remain: 10
process data: 48 remain: 9
process data: 48 remain: 8
process data: 20 remain: 7
process data: 89 remain: 6
process data: 12 remain: 5
process data: 20 remain: 4
process data: 43 remain: 3
process data: 27 remain: 2
process data: 9 remain: 1
process data: 0 remain: 0
```
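To see the "lost notification" behavior in isolation, here is a minimal sketch (not part of the example above; names are illustrative). The producer notifies before the consumer ever reaches wait, so the notification is dropped, and it is the predicate that keeps the consumer from blocking forever:

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool flag = false;

int main() {
    // The producer sets the flag and notifies before the consumer
    // ever reaches wait(): that notification is dropped, not queued.
    std::thread producer([] {
        { std::lock_guard<std::mutex> lk(m); flag = true; }
        cv.notify_one(); // nobody is waiting yet -- the signal is lost
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::unique_lock<std::mutex> lk(m);
    // The predicate sees flag == true and returns immediately; without
    // it, this wait could block forever since the notification is gone.
    cv.wait(lk, [] { return flag; });
    std::cout << "consumer proceeded despite the missed notification\n";
    producer.join();
}
```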