C++ Multithreading Foundations

Posted by webbwbb on Sat, 18 Dec 2021 06:43:47 +0100

1. Thread Foundation

This part lives in the header `<thread>`

Main contents:

std::thread t1(func, para);
std::thread t2(&A::func, &Aobj, para);
t1.join();
t1.detach();
t1.joinable();

1.1 creating threads

  • Basic form: std::thread obj(func), where func is a callable object, which can be:
    1. a plain function
    2. a function object – a class that overloads operator()
    3. a lambda expression
    4. a class member function – requires an extra argument (a pointer to the class object)

1.2 thread parameters

  • Basic form: thread obj(func, para1, para2, ...)
    • Whether you pass an object or a reference to it, the thread object stores its own copy
    • You can use std::ref to force an argument to be passed by reference
    • If an argument needs an implicit conversion, convert it explicitly at the call site instead; otherwise the conversion may happen inside the new thread, after the source object has been destroyed
    • If you pass a pointer, avoid detach(): the pointed-to memory may be released before the thread finishes

1.3 joining and detaching threads

  • Join: obj.join() – the main thread waits for the thread to finish before continuing

  • Detach: obj.detach() – the thread runs in the background and is no longer tied to the main thread

  • Query: bool b = obj.joinable() – whether join() can be called

    • If join() is allowed, detach() is allowed too

2. Data sharing between threads

This part lives in the headers `<mutex>` and `<condition_variable>`

Main contents:

std::mutex mut1;
mut1.lock();
mut1.unlock();
std::lock_guard<std::mutex> lGuard(mut1);
std::lock_guard<std::mutex> lGuard(mut1, std::adopt_lock);
std::lock(mut1, mut2, ...);
std::unique_lock<std::mutex> u1(mut1);
std::unique_lock<std::mutex> u1(mut1, std::adopt_lock);
std::unique_lock<std::mutex> u2(mut1, std::defer_lock);
std::unique_lock<std::mutex> u3(mut1, std::try_to_lock);
u2.lock();
u2.unlock();
u3.owns_lock();
std::mutex* pm = u3.release();
u2.try_lock();
std::once_flag flag1;
std::call_once(flag1, func);
std::condition_variable myCV;
myCV.notify_one();
myCV.notify_all();
myCV.wait(u1, [bool_lambda]);

2.1. Mutexes and locks

  • When multiple threads access shared data, race conditions must be prevented
  • Therefore only one thread may access the shared data at a time, which is achieved with a mutex and locking
    • std::mutex myMutex defines a mutex
    • myMutex.lock() locks it
    • myMutex.unlock() unlocks it
  • While one thread holds the mutex, any other thread blocks at lock() until the first thread unlocks
  • To avoid forgetting to unlock, use std::lock_guard<std::mutex> l1(myMutex)
    • It locks in the constructor and unlocks automatically when the scope ends
    • You can use an extra {} block to control when the unlock happens
    • This is RAII – "Resource Acquisition Is Initialization": the destructor automatically releases the resource

2.1.1 exclusive mutex

  • std::mutex is an exclusive mutex: while one thread holds the lock, no other thread can acquire it
  • It is the most commonly used kind

2.1.2 recursive mutex

  • recursive_mutex – a recursive mutex
  • The same thread may lock the same mutex multiple times
    • For example, if code that already holds the mutex calls a function that locks the same mutex again, no exception is thrown
  • It is generally less efficient, though, and code that needs it usually has room for refactoring

2.1.3 mutexes with timeouts

  • timed_mutex – exclusive
  • recursive_timed_mutex – recursive
  • Member functions
    • Declaration: timed_mutex tMut;
    • Wait for a duration; return false if the lock cannot be acquired in time
      • tMut.try_lock_for(time1)
      • Duration type: std::chrono::seconds(4)
    • Wait until a time point; return false if the lock cannot be acquired by then
      • tMut.try_lock_until(time2);
      • Time point: std::chrono::steady_clock::now() + time1

2.2 deadlock and its prevention

  • When several locks must be held together and are acquired in different orders at different places, deadlock can occur
  • The main way to prevent deadlock:
    • Make sure every location acquires the locks in the same order
  • You can also use std::lock(mut1, mut2, mut...)
    • It locks several mutexes together: if one of them cannot be acquired, the ones already taken are released and the operation retries, so it never deadlocks
    • The drawback is that the mutexes must be unlocked manually
  • Combine it with std::adopt_lock to unlock automatically
std::lock(mut1, mut2, mut...); // lock together
// automatic unlocking
std::lock_guard<std::mutex> l1(mut1, std::adopt_lock);
std::lock_guard<std::mutex> l2(mut2, std::adopt_lock);
...

2.3 unique_lock class

  • Similar to std::lock_guard, but more flexible and slightly less efficient
  • Usage: std::unique_lock<std::mutex> u1(mut1);
    • Creates a unique_lock associated with mut1; it locks immediately and unlocks automatically
  • Parameters and member functions:
    1. unique_lock<mutex> u1(mut1, adopt_lock);
      • Associates with mut1, which this thread has already locked
    2. unique_lock<mutex> u2(mut1, defer_lock);
      • Associates with mut1 but does not lock it
      • You can later call u2.lock() and u2.unlock() to lock and unlock flexibly
    3. unique_lock<mutex> u3(mut1, try_to_lock);
      • Associates with mut1 and attempts to lock it
      • bool b = u3.owns_lock() tells you whether the lock succeeded, so you can branch on it
    4. u3.try_lock() also returns whether locking succeeded, provided u3 was initialized with defer_lock – in short, the mutex held by u3 must be unlocked before the call
    5. mutex* pm = u3.release() breaks the association between u3 and mut1 and returns a pointer to mut1; pm must then be used to unlock it

2.4 multithreading with the singleton pattern

  • Basic form of the singleton pattern
class Singtn{
public:
    static Singtn* getInstance(){ // singleton accessor
        if(!instance){
          createInstance();
        }
        return instance;
    }

    void print(){ // member function
        cout << "my singleton!\n";
    }

    class TrashRecycle{ // garbage collection
    public:
        ~TrashRecycle(){ // free the memory in the destructor
            if(Singtn::instance){
                delete Singtn::instance;
                Singtn::instance = nullptr;
            }
        }
    };

private:
    static Singtn* instance; // singleton pointer

    Singtn(){} // private constructor

    static void createInstance(){ // singleton initialization
        instance = new Singtn();
        static TrashRecycle tr;
    }
};

// pointer initialization
Singtn* Singtn::instance = nullptr;

// usage
Singtn* pSgt = Singtn::getInstance();
pSgt->print();
  • When multiple threads use the singleton, several of them may call createInstance() at the same time

Two solutions:

  1. Double-checked locking (recommended)
std::mutex mut1;
Singtn* Singtn::getInstance(){ // modified singleton accessor
    if(!instance){ // double-checked locking
      std::lock_guard<std::mutex> lg1(mut1);
      if(!instance){
        createInstance();
      }
    }
    return instance;
}
  2. `std::call_once()` (simpler code, but less efficient)
std::once_flag flag1;
Singtn* Singtn::getInstance(){ // modified singleton accessor
    std::call_once(flag1, createInstance);
    return instance;
}

2.5 condition variables

  • Some operations may only be performed when the shared data satisfies a certain condition
    • If you test the condition with plain branches inside a loop,
    • the loop spins as long as the condition is unmet: high resource usage and no useful work
    • What you want is to sleep while the condition is unmet and be woken when it is met
  • That is what condition variables provide
/* -------------------- Background -------------------- */
/*   1. "Work 2" can only run after "work 1" has run one or more times
     2. After "work 2" runs, the result of "work 1" may revert to the
        "not yet done" state, so that "work 2" cannot run again       */
/* ---------------------------------------------------- */

// 1. Create the condition variable and the mutex
std::condition_variable myCV;
std::mutex myMut;

// Thread 1 does "work 1"
void thread_func1()
{
  std::unique_lock<std::mutex> uLock(myMut); // lock
  sharedData.doSomething_1(); // work 1 processes the shared data
  // It has run once, so the precondition of "work 2" holds: wake it up
  myCV.notify_one(); // 3. wake the sleeping thread
}

// Thread 2 does "work 2"
void thread_func2()
{
  std::unique_lock<std::mutex> uLock(myMut); // lock
  // 2. Check whether "work 1" has run, so that "work 2" may proceed
  myCV.wait(uLock, []{
    return sharedData.thing_1_done();
  });
  // If the lambda returns true, execution continues; if it returns false,
  // wait() unlocks uLock and sleeps until thread 1 calls notify_one().
  // After waking up, the thread:
    // (1) tries to re-acquire the lock
    // (2) once locked, re-evaluates the lambda and continues only if it is
    //     true; otherwise it goes back to sleep
  sharedData.doSomething_2(); // work 2 processes the shared data
}
  • If several threads are waiting, notify_one() wakes just one of them (unspecified which). To wake all of them, use notify_all().
  • Spurious wakeups – a thread may wake when the condition does not hold, or wake more than once
    • The predicate lambda passed to wait() guards against this by re-checking the condition

3. future class

The following classes and functions live in the header `<future>`

Main contents:

std::future<int> fu = std::async(func, para...);
future<int> fu = async(std::launch::async,func, para...);
fu = async(std::launch::deferred,func, para...);
fu.get();
fu.wait();
std::future_status myStt;
myStt = fu.wait_for(std::chrono::seconds(10));
std::packaged_task<int(int)> pkt(thread_func);
std::thread t1(std::ref(pkt), para...);
pkt(para);
fu = pkt.get_future();
std::promise<int> res;
res.set_value(val);
fu = res.get_future();
myStt == std::future_status::timeout;
myStt == std::future_status::ready;
myStt == std::future_status::deferred;
std::shared_future<int> myFu_s(myFu.share());
std::shared_future<int> myFu_s(std::move(myFu));
std::shared_future<int> myFu_s(res.get_future());
std::shared_future<int> myFu_s(pkt.get_future());

3.1 getting a future object from std::async()

Four ways to use it:

  • future<int> fu = async(func, para...)

    • Without a launch-policy argument, the policy defaults to "any"
      • any = async | deferred: the system picks one automatically
      • If resources are tight it picks deferred; otherwise async
    • Use this form to bind an asynchronous task
  • future<int> fu = async(std::launch::async, func, para...)

    • This statement starts the thread immediately
      • Thread creation is forced, so if system resources are exhausted the program may crash
    • fu.get() or fu.wait() waits for the thread to finish
    • If neither is called, the future's destructor waits for the thread, e.g. at `return 0;` in main
  • future<int> fu = async(std::launch::deferred, func, para...)

    • The task starts only at fu.get() or fu.wait(); if neither is called it never runs
    • Moreover, func actually runs in the thread that calls fu.get() or fu.wait() – no new thread is created

3.2 methods of a future object

  • int res = fu.get()
    • The result stored in fu is moved into res
    • Therefore it can only be called once
  • fu.wait()
    • Similar to thread join(): waits for the task to finish, but does not fetch the result

3.3 packaged_task class

  • Purpose – wrap a callable so its result can be fetched through a future object
  • Usage
// thread entry function
int thread_func(int para);
// wrapping
std::packaged_task<int(int)> pkt(thread_func);
// ------------------ ^^^ common setup ^^^ ------------------
// Usage 1:
std::thread t1(std::ref(pkt), para); // create a thread – note the std::ref
t1.join(); // main thread waits
// Usage 2: – effectively an ordinary function call; no new thread is started
pkt(para); // call the wrapped function
// ------------------ vvv common follow-up vvv ------------------
std::future<int> fu = pkt.get_future(); // fetch the return value
int res = fu.get();
  • There is no copy constructor, so a packaged_task cannot be copied, only moved
  • And get_future() can only be called once

3.4 promise class

  • Purpose – pass a reference to a promise object into a void entry function
    • and then convert it into a future object
  • Usage
void thread_func(std::promise<int> &pr, int para){
  int val = dosomething(para); // process
  pr.set_value(val); // store the result
}
std::promise<int> pr; // declare the object
std::thread t1(thread_func, std::ref(pr), 12); // create the thread
t1.join();
std::future<int> fu = pr.get_future(); // fetch the return value
int res = fu.get();
  • Note that, likewise, there is no copy constructor: a promise cannot be copied, only moved
  • And get_future() can only be called once

3.5 future_status enumeration

  • First, create the thread with one of the async() forms of 3.1
  • The status comes from the future's member function fu.wait_for(time); there are only three possible return values:
    • std::future_status::timeout
      • Returned when the task's execution time exceeds the wait time
      • The task keeps running; you then wait at get()/wait(), or at `return 0;` via the future's destructor
    • std::future_status::ready
      • The task finished within the wait time
    • std::future_status::deferred
      • Means the future was created with async(std::launch::deferred, func)
      • The task waits for get()/wait() to start it; otherwise it never starts
  • The wait time has type std::chrono::seconds(t) – wait t seconds
  • As a bonus, fu.wait_for(std::chrono::seconds(0)) tells you whether a policy-less async() chose deferred mode (i.e. system resources were tight)

3.6 shared_future class

  • Constructed from a future object
    • std::shared_future<int> myFu_s(myFu.share());
      • via the member function share()
    • std::shared_future<int> myFu_s(std::move(myFu));
      • via an rvalue
    • Either way, myFu becomes empty afterwards
      • myFu.valid() == false
  • Constructed from a packaged_task or promise object
    • std::shared_future<int> myFu_s(pkt.get_future());
      • from a packaged_task
    • std::shared_future<int> myFu_s(res.get_future());
      • from a promise
    • This works through implicit conversion from the returned future
  • On the new object, myFu_s.get() can be called any number of times: it copies the stored value rather than moving it

4. Atomic variables

Header file: `<atomic>`

Main contents:

std::atomic<int> val(0);
std::atomic<bool> flag(false);
std::atomic<int> val2(val.load());
val.store(34);
  • Atomic operation
    • A piece of work that cannot be interrupted by other threads, i.e. an operation that:
      • is either fully done or not done at all – there is no intermediate state
    • Even if it compiles to several assembly instructions, it is guaranteed that:
      • either none of them has run or all of them have; it cannot be interrupted in the middle
  • Atomic variable
    • std::atomic<int> val(0);
    • val is then an atomic variable: operations applied directly to it are atomic and cannot be interleaved with other threads
      • Direct operations: ++, --, +=, -=, &=
      • But, for example, val = val + 1 is not a single atomic operation and can be interleaved with other threads
      • And, for example, cout << val << endl is not atomic either; the read of the atomic value itself is still safe, but the value may already have changed by the time it is printed
    • Conflicts at the assembly level can therefore be ignored when operating directly on atomic variables
  • Difference from a mutex:
    • A mutex belongs to lock-based programming: typically a whole block of code is locked to protect shared data
    • Atomic operations belong to lock-free programming: typically a direct operation on a single variable
  • Atomic variables do not allow copy construction or copy assignment – those members are deleted
    • If you just want to read or write the value, you can use:
      • std::atomic<int> atm2(atm.load());
      • atm2.store(12);

5. Other contents

5.1 Windows critical sections

  • Code example
#include <windows.h>
using namespace std;
#define __WINDOWSJQ_

#ifdef __WINDOWSJQ_
    CRITICAL_SECTION my_winsec; // critical section
#endif

class A{
public:
    A();
    void InMsg();
    void OutMsg();
private:
    shared_data sData;
};

A::A(){
#ifdef __WINDOWSJQ_
    // initialize the critical section
    InitializeCriticalSection(&my_winsec);
#endif
}

void A::InMsg(){
#ifdef __WINDOWSJQ_
    // enter the critical section ~= lock
    EnterCriticalSection(&my_winsec);
    // process shared data
    sData.msgIn();
    // leave the critical section ~= unlock
    LeaveCriticalSection(&my_winsec);
#endif
}

void A::OutMsg(){
#ifdef __WINDOWSJQ_
    int msg;
    // enter the critical section ~= lock
    EnterCriticalSection(&my_winsec);
    // process shared data
    msg = sData.msgOut();
    // leave the critical section ~= unlock
    LeaveCriticalSection(&my_winsec);
#endif
}
  • Within the same thread, the same critical section
    • may be entered multiple times and left multiple times
    • but the number of entries must equal the number of exits
  • A C++11 std::mutex, in contrast, must not be locked repeatedly by the same thread

5.2 thread pools

5.2.1 server programs:

  • Naive approach:
    • Create one thread per client to serve it (works when there are few clients)
  • Problems:
    • Too many threads exhaust system resources
      • Around 2000 threads is a practical limit; beyond that the process may crash
      • Common sizing suggestions are the number of CPUs, the number of CPUs * 2, etc.
      • Sometimes the number must be determined from the concrete business workload
      • Far too many threads defeats the purpose: scheduling overhead lowers efficiency
      • As a rule of thumb, stay under 500, preferably under 200
    • Thread counts that swing widely (frequent creation and destruction) make the system unstable

5.2.2 the thread-pool approach

  • Characteristics:
    • The number of threads is small and changes little
  • Unified management:
    • Take a thread from the pool when needed
    • Return it to the pool after use
    • Threads are not destroyed
  • Implementation:
    • When the program starts, a fixed number of threads is created up front

Topics: C++ Back-end Multithreading