03 Reworking C++ Concurrency in Practice (6)

Posted by stuartbates on Mon, 03 Jan 2022 07:50:00 +0100


Previous: 03 Reworking C++ Concurrency in Practice (5)


[Designing lock-based concurrent data structures]

6.1 The meaning of designing for concurrency

At the most basic level, designing a data structure for concurrency means that multiple threads can access it at the same time, performing the same or different operations, and that each thread sees a consistent view of it. A data structure that loses no data, breaks no invariants, and has no problematic race conditions is called thread-safe. Usually a data structure is only safe for particular kinds of concurrent access: it may be perfectly safe for multiple threads to perform different operations on it concurrently, yet problematic for multiple threads to perform the same operation concurrently.

In fact, designing for concurrency means far more than just giving multiple threads the opportunity to access the data structure. In essence, a mutex provides mutual exclusion: only one thread can acquire the lock on the mutex at a time. A mutex protects a data structure by explicitly preventing concurrent access to the data it protects. This is called serialization: threads take turns accessing the protected data, accessing it sequentially rather than concurrently.

Guidelines for designing data structures for concurrency

The basic principles for making a data structure thread-safe:

  • Ensure that no thread can see a state in which the invariants of the data structure have been broken by the actions of another thread.
  • Avoid race conditions inherent in the interface of the data structure by providing functions for complete operations rather than for individual operation steps (see the sketch at the end of this section).
  • Pay attention to how the data structure behaves when exceptions are thrown, to ensure that its invariants are not broken.
  • Reduce the opportunities for deadlock when using the data structure by restricting the scope of locks and avoiding nested locks.

Before considering these points, there is another question to think about: if one thread is accessing the data structure through a particular function, which functions are safe to call from other threads? Most constructors and destructors require exclusive access to the data structure, so users must ensure it is not accessed before the constructor finishes or after the destructor starts. If the data structure supports assignment, swap(), or copy construction, its designer must decide whether these operations are safe to call concurrently with other operations, or whether the user must guarantee mutual exclusion, even though the majority of functions on the data structure may be called concurrently without problems. The second issue to consider is enabling genuine concurrent access:

  • Whether the scope of a lock can be limited, so that part of an operation can be performed outside the lock.
  • Whether different parts of the data structure can be protected by different mutexes.
  • Whether all operations require the same level of protection.
  • Whether a small change to the data structure can improve the opportunity for concurrency without affecting the operational semantics.

A brief summary of these ideas: where serialization is unavoidable, minimize the scope of locks and maximize the potential for concurrency.
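
As a concrete illustration of the second guideline above, here is a sketch of my own (not from the original post), assuming a stack whose individual member functions are each internally locked; process() is a hypothetical handler declared only for illustration. Even if empty(), top(), and pop() are each thread-safe in isolation, their combination is not atomic, so the race lives in the interface itself; the cure is a single combined operation such as try_pop(value).

#include <stack>

void process(int value); //hypothetical handler, declared for illustration only

//Assumption: every member call on s is internally protected by its own lock.
//The sequence as a whole is still racy.
void unsafe_combination(std::stack<int>& s)
{
    if (!s.empty())                //another thread may pop the last element
    {                              //right after this check...
        int const value = s.top(); //...so top() may run on an empty stack,
        s.pop();                   //and pop() may discard a value that no
        process(value);            //thread ever read
    }
}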

6.2 Lock-based concurrent data structures

The key to designing a lock-based concurrent data structure is to ensure that the correct mutex is locked when accessing the data and that the lock is held for the minimum amount of time. Even protecting a data structure with a single mutex is difficult: you must ensure the data cannot be accessed outside the protection of the lock and that there are no race conditions inherent in the interface. If separate mutexes protect separate parts of the data structure, these problems compound, and deadlock becomes possible when an operation must lock more than one mutex. A design with multiple mutexes therefore needs even more careful consideration than one with a single mutex.

A thread-safe queue using locks and condition variables

The following implementation uses std::shared_ptr<T> to hold the data.

#ifndef _QUEUE_P_H_
#define _QUEUE_P_H_

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

template <typename T>
class threadsafe_queue
{
private:
    mutable std::mutex mut;
    std::queue<std::shared_ptr<T> > data_queue;
    std::condition_variable data_cond;
public:
    threadsafe_queue() {}
    threadsafe_queue(threadsafe_queue &) = delete;
    threadsafe_queue(threadsafe_queue &&) = delete;
    threadsafe_queue &operator=(threadsafe_queue &) = delete;
    threadsafe_queue &operator=(threadsafe_queue &&) = delete;
    virtual ~threadsafe_queue() {}
    
    void wait_and_pop(T& value)
    {   
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this](){return !data_queue.empty();});
        value = std::move(*data_queue.front()); // [1]
        data_queue.pop();
    }
    
    bool try_pop(T& value)
    {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty())
            return false;
        value = std::move(*data_queue.front()); // [2]
        data_queue.pop();
        return true;
    }

    std::shared_ptr<T> wait_and_pop()
    {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this](){return !data_queue.empty();});
        std::shared_ptr<T> res = data_queue.front(); //[3]
        data_queue.pop();
        return res;
    }

    std::shared_ptr<T> try_pop()
    {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty())
            return std::shared_ptr<T>();
        
        std::shared_ptr<T> res = data_queue.front(); // [4]
        data_queue.pop();
        return res;
    }

    void push(T new_value)
    {
        std::shared_ptr<T> data(
            std::make_shared<T>(std::move(new_value))); // [5]
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(data);
        data_cond.notify_one();
    }

    bool empty() const
    {
        std::lock_guard<std::mutex> lk(mut);
        return data_queue.empty();
    }

};

#endif //_QUEUE_P_H_

The basic effect of holding the data via std::shared_ptr<T> is straightforward: the pop functions that take a reference to a variable must now dereference the stored pointer [1], [2], and the pop functions that return an std::shared_ptr<T> instance can retrieve it from the queue [3], [4] before returning it to the caller.

Holding the data via std::shared_ptr<T> has an additional advantage: the allocation of the new instance can be done outside the lock in push() [5], whereas otherwise it could only be done while pop() held the lock. Because memory allocation is typically an expensive operation, this helps the performance of the queue: it shortens the time the mutex is held, allowing other threads to perform operations on the queue in the meantime.
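
A minimal usage sketch (mine, not from the original post) with one producer and one consumer; the commented include line is an assumption about the header's file name:

#include <iostream>
#include <string>
#include <thread>
//#include "queue_p.h" //assumed file name for the threadsafe_queue header above

int main()
{
    threadsafe_queue<std::string> q;

    std::thread producer([&] {
        for (int i = 0; i < 5; ++i)
            q.push("message " + std::to_string(i)); //allocation happens outside the lock [5]
    });
    std::thread consumer([&] {
        for (int i = 0; i < 5; ++i)
        {
            std::shared_ptr<std::string> msg = q.wait_and_pop(); //blocks until data is available
            std::cout << *msg << '\n';
        }
    });

    producer.join();
    consumer.join();
}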

A thread-safe queue using fine-grained locks and condition variables

The simplest data structure with which to implement a queue is a singly linked list. The queue holds a head pointer to the first item in the list, and each item points to the next. To pop data, the head pointer is advanced to the next item and the item the old head pointed to is returned. Data is added at the other end of the queue; for this, the queue also holds a tail pointer to the last item in the list. A node is added by pointing the last item's next at the new node and then updating the tail pointer to the new item. When the list is empty, both the head and the tail pointer are NULL.
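
Before any locking is added, a single-threaded sketch of this design might look as follows (my sketch, following the description above; not code from the post):

#include <memory>
#include <utility>

template <typename T>
class simple_queue //single-threaded sketch: no locking yet
{
    struct node
    {
        T data;
        std::unique_ptr<node> next;
        node(T data_) : data(std::move(data_)) {}
    };
    std::unique_ptr<node> head; //first item; null when empty
    node* tail = nullptr;       //last item; null when empty
public:
    void push(T new_value)
    {
        std::unique_ptr<node> p(new node(std::move(new_value)));
        node* const new_tail = p.get();
        if (tail)
            tail->next = std::move(p); //append after the old tail
        else
            head = std::move(p);       //first element: head points at it too
        tail = new_tail;
    }
    std::shared_ptr<T> try_pop()
    {
        if (!head)
            return std::shared_ptr<T>();
        std::shared_ptr<T> const res(std::make_shared<T>(std::move(head->data)));
        std::unique_ptr<node> const old_head = std::move(head);
        head = std::move(old_head->next); //advance head to the next item
        if (!head)
            tail = nullptr;               //queue became empty
        return res;
    }
};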

However, trying to use fine-grained locks on this design in a multithreaded environment causes problems. Given the two data members head and tail, in principle two mutexes could be used, one protecting head and the other protecting tail. But there are issues. The most obvious is that push() can modify both head and tail, so it must lock both mutexes. Fortunately that is not a big problem, because locking two mutexes is possible. The key problem is that both push() and pop() need to access a node's next pointer: push() updates tail->next, and pop() reads head->next. If the queue contains a single node, these two next pointers refer to the same object, which therefore needs protection. Since it is impossible to tell whether head and tail are the same node without reading both, push() and pop() end up having to lock the same mutex.

To solve this problem and allow real concurrency, we can separate the two ends: pre-allocate a dummy node that stores no data, guaranteeing that the queue always contains at least one node and separating the node accessed at the head from the node accessed at the tail. For an empty queue, both head and tail point to the dummy node instead of being NULL. This works because pop() does not access head->next when the queue is empty. Once a real node has been added, head and tail point to different nodes, so there is no contention on them. The drawback is an extra level of indirection: data must be stored through a pointer to allow for the data-less dummy node.

The following is the thread-safe queue using the dummy node:

#ifndef _TS_QUEUE_PUPPET_H_
#define _TS_QUEUE_PUPPET_H_

#include <condition_variable>
#include <memory>
#include <mutex>
#include <utility>

template<typename T>
class ThreadSafeQueue
{
private:
    struct node //Node data structure
    {
        std::shared_ptr<T> data;
        std::unique_ptr<node> next;
    };

    std::mutex head_mutex; //Head lock
    /* The header pointer uses unique_ptr to avoid memory leakage */
    std::unique_ptr<node> head; //Head pointer
    std::mutex tail_mutex; //Tail lock
    node* tail; //Tail pointer
    /* Conditional variable */
    std::condition_variable data_cond;

private: //Internal private methods
    node* get_tail()
    {	/* Lock the tail mutex and return the tail pointer */
        std::lock_guard<std::mutex> tail_lock(tail_mutex);
        return tail;
    }

    std::unique_ptr<node> pop_head()
    {	/* Unlink the old head node and return it; the caller must hold head_mutex */
        std::unique_ptr<node> old_head = std::move(head);
        head = std::move(old_head->next);
        return old_head;
    }

    std::unique_lock<std::mutex> wait_for_data()
    {	/* Lock the head mutex, wait until data is available, then transfer ownership of the lock to the caller */
        std::unique_lock<std::mutex> head_lock(head_mutex);
        data_cond.wait(head_lock, [&] { return head.get() != get_tail(); }); // [1]
        return std::move(head_lock);
    }

    std::unique_ptr<node> try_pop_head()
    {	/* Non-blocking pop of the head */
        std::lock_guard<std::mutex> head_lock(head_mutex); //Head lock
        if (head.get() == get_tail()) // [2]
        {	// If head == tail, the queue is empty
            return nullptr; //Return a null pointer
        }
        return pop_head(); //Queue not empty: unlink and return the old head node
    }

    std::unique_ptr<node> wait_pop_head()
    {	/* Blocking pop of the head */
        std::unique_lock<std::mutex> head_lock(wait_for_data()); //Wait for data and acquire the lock
        return pop_head(); //Unlink and return the old head node
    }
    
public: //Common method
    ThreadSafeQueue():head(new node()), tail(head.get())
    { }
    ThreadSafeQueue(ThreadSafeQueue &&) = delete; //the std::mutex members make the queue non-movable
    ThreadSafeQueue(const ThreadSafeQueue &) = delete;
    ThreadSafeQueue &operator=(ThreadSafeQueue &&) = delete;
    ThreadSafeQueue &operator=(const ThreadSafeQueue &) = delete;
    virtual ~ThreadSafeQueue()
    { }

    void push(T new_value)
    {	//Construct the shared_ptr holding the data (outside the lock)
        std::shared_ptr<T> new_data(std::make_shared<T>(std::move(new_value)));
        //Construct the new dummy node and hold it via unique_ptr
        std::unique_ptr<node> p(new node());
        
        { //Inner block: fill in the data under the tail lock
            std::lock_guard<std::mutex> tail_lock(tail_mutex); //Acquire the tail lock
            node* const new_tail = p.get(); //Raw pointer to the new dummy node
            tail->data = new_data; //The old dummy node now carries the data
            tail->next = std::move(p); //Link the new dummy node after it
            tail = new_tail; //Advance the tail pointer
        } //The lock is released automatically at the end of the block
        data_cond.notify_one(); //Notify the condition variable
    }
    std::shared_ptr<T> try_pop()
    {	//Call internal methods to dequeue
        std::unique_ptr<node> old_head = try_pop_head();
        //return
        return old_head ? old_head->data : std::shared_ptr<T> ();
    }

    std::shared_ptr<T> wait_and_pop()
    {	//Call internal methods to dequeue
        std::unique_ptr<node> const old_head = wait_pop_head();
        //return
        return old_head->data;
    }

    bool empty()
    {
        std::lock_guard<std::mutex> head_lock(head_mutex);
        return (head.get() == get_tail());
    }

};

#endif // _TS_QUEUE_PUPPET_H_

Comments cover most of the details. One point deserves emphasis: [1] and [2] imply an order of lock acquisition. You must acquire the lock protecting the head pointer first and only then the lock protecting the tail pointer (get_tail() is called while the head lock is held). This keeps the data structure safe: if the order were reversed, the tail pointer read could be stale by the time the head lock is taken, so a thread could compare the head against an old tail and misjudge whether the queue is empty, leading to confused data or even a corrupted data structure.
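
For contrast, here is what the reversed order would look like, written as a hypothetical extra member of the ThreadSafeQueue above (my sketch, illustrating the failure mode just described; not code from the post):

    std::unique_ptr<node> broken_try_pop_head() //hypothetical: do NOT add this
    {
        node* const old_tail = get_tail();                 //tail read FIRST...
        std::lock_guard<std::mutex> head_lock(head_mutex); //...head locked second
        //Between the two lines above, other threads may have pushed new items
        //and popped the node old_tail pointed at, so old_tail is stale; the
        //comparison below can misjudge emptiness and corrupt the queue.
        if (head.get() == old_tail)
            return nullptr;
        return pop_head();
    }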

6.3 Designing more complex lock-based data structures

6.3.1 Writing a thread-safe lookup table using locks

A lookup table or dictionary associates values of one type (the key type) with values of the same or a different type (the mapped type). Generally speaking, the purpose of such a data structure is to let code look up the data associated with a given key. In the C++ standard library this role is filled by the associative containers: std::map<>, std::multimap<>, std::unordered_map<>, and std::unordered_multimap<>. From a concurrency standpoint, the biggest problem with the std::map<> interface is iterators. Although it is possible to design an iterator that remains safe while other threads access (and modify) the container, it is tricky: handling iterators correctly means dealing with problems such as another thread deleting the element the iterator refers to. So the first thing to cut from the interface of a thread-safe lookup table is iterators. The interface of std::map<> (and of the other associative containers in the standard library) leans heavily on iterators, so it is worth redesigning the interface instead.

Basic operations on a lookup table:

  • Add a new key-value pair;
  • Change the value associated with a given key;
  • Remove a key and its associated value;
  • Get the value associated with a given key, if any.

Some container-wide operations are also useful, such as a check of whether the container is empty, a snapshot of the complete list of keys, or a snapshot of the complete set of key-value pairs. Once the interface is settled (so that there are no race conditions inherent in it), thread safety can be guaranteed by protecting the underlying data structure with a single mutex and a simple lock in every member function. However, this squanders the concurrency offered by having separate functions for reading and for modifying the structure. One improvement is to use a mutex that supports multiple reader threads or a single writer thread. Although that raises the possible level of concurrent access, only one thread can still modify the data structure at a time.
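
A minimal sketch of that single-mutex, reader-writer approach (my code, using Boost's shared_mutex as the implementation later in this post does; the class and member names are made up):

#include <map>
#include <mutex>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

//One reader-writer mutex around the whole map: many concurrent readers,
//but any writer serializes the entire container.
template <typename Key, typename Value>
class simple_rw_map
{
    std::map<Key, Value> data;
    mutable boost::shared_mutex mutex;
public:
    bool try_get(Key const& key, Value& out) const
    {
        boost::shared_lock<boost::shared_mutex> lock(mutex); //shared: readers
        auto const it = data.find(key);
        if (it == data.end())
            return false;
        out = it->second;
        return true;
    }
    void set(Key const& key, Value const& value)
    {
        std::unique_lock<boost::shared_mutex> lock(mutex); //exclusive: writer
        data[key] = value;
    }
};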

Designing a map data structure for fine-grained locking

To permit fine-grained locking, as with the thread-safe queue implemented earlier, we have to look at the details of the structure. An associative container such as a lookup table is usually implemented in one of the following three ways.

  • A binary tree, such as a red-black tree;
  • A sorted array;
  • A hash table.

A binary tree offers little scope for extending concurrency. Every lookup or modification has to start from the root node, so the root must be locked; although that lock can be released as the thread moves down the tree, this is not much better than locking the entire data structure. A sorted array is even worse, because there is no way to know in advance where in the array a given value will be, so the entire array has to be locked at once.

That leaves the hash table. Assume a fixed number of buckets, where the bucket a key belongs to depends entirely on the key and the properties of the hash function. This means each bucket can safely have its own independent lock. Reusing a mutex that supports multiple readers or a single writer multiplies the opportunity for concurrency by N, where N is the number of buckets. The drawback is finding a good hash function for the key. The C++ standard library provides the std::hash<> template for this purpose; it is already specialized for fundamental types such as int and for common library types such as std::string, and users can easily specialize it for other key types. If, following the standard unordered containers, you take the type of the hash function object as a template parameter, the user can choose between specializing std::hash<> for the key type and supplying a separate hash function.
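
For example, here is a sketch (with a made-up key type) of specializing std::hash<> for a user-defined key so that it works as the default Hash parameter of the lookup table below:

#include <functional>
#include <string>

struct order_id //made-up key type for illustration
{
    std::string region;
    unsigned number;
};

namespace std
{
    template <>
    struct hash<order_id>
    {
        std::size_t operator()(order_id const& id) const
        {
            //Combine the members' hashes; this XOR/shift combination is
            //illustrative only, not a recommended mixing function.
            return std::hash<std::string>()(id.region)
                   ^ (std::hash<unsigned>()(id.number) << 1);
        }
    };
}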

[Implementation]

#ifndef _THREADSAFE_LOOKUP_TABLE_H_
#define _THREADSAFE_LOOKUP_TABLE_H_

#include <algorithm>
#include <functional>
#include <list>
#include <boost/thread/shared_mutex.hpp>
#include <mutex>
#include <utility>
#include <vector>

template<typename Key, typename Value, typename Hash = std::hash<Key> >
class threadsafe_lookup_table
{
private:
    //=========================================== Inner class: bucket_type =========================================== start
    class bucket_type //Internal bucket data structure
    {
    private:
        typedef  std::pair<Key, Value> bucket_value; //Type of the key-value pairs stored in the bucket
        typedef  std::list<bucket_value> bucket_data; //List type holding the bucket's key-value pairs
        typedef typename bucket_data::iterator bucket_iterator; //Iterator type of that list

        bucket_data data; //The list storing the data
        //Each bucket has its own shared_mutex protecting its data
        mutable boost::shared_mutex mutex; // [1]
        
        //Find the entry for the given key in the bucket and return an iterator to it
        bucket_iterator find_entry_for(Key const& key) // const removed; see the note after the listing [2]
        {
            return std::find_if(data.begin(), data.end(),
                        [&](bucket_value const& item) 
                            { return item.first == key;});
        }
    public:
        //Return the value for the given key, or the default value if the key is absent
        Value value_for(Key const& key, Value const& default_value) // const removed (the book declares this const); see the note after the listing
        {
            boost::shared_lock<boost::shared_mutex> lock(mutex); // [3] Read shared lock
            bucket_iterator const found_entry = find_entry_for(key);
            return (found_entry == data.end()) ? default_value : found_entry->second;
        }
		//Add a key-value pair, or update the existing one
        void add_or_update_mapping(Key const& key, Value const& value)
        {
            std::unique_lock<boost::shared_mutex> lock(mutex); // [4] Exclusive write lock
            bucket_iterator const found_entry = find_entry_for(key);
            if (found_entry == data.end()) //Key absent -> add
            {
                data.push_back(bucket_value(key, value));
            }
            else //Key present -> update
            {
                found_entry->second = value;
            }
        }
        //Remove a key-value pair
        void remove_mapping(Key const& key)
        {
            std::unique_lock<boost::shared_mutex> lock(mutex); //[5] Exclusive write lock
            bucket_iterator const found_entry = find_entry_for(key);
            if (found_entry != data.end()) //Erase only if found (the original compared with ==, which erased end())
            {
                data.erase(found_entry);
            }
        }
    };
    //=========================================== Inner class: bucket_type =========================================== end
    
	// The table holds a vector of buckets and lets the number of buckets be specified in the constructor (the default here is 19).
    // Hash tables work best with a prime number of buckets. Each bucket allows many concurrent readers or a single caller of a modifying function.
    std::vector<std::unique_ptr<bucket_type> > buckets; //[6]
    // Hash function instance
    Hash hasher;
    
    //Return the bucket holding the data for the given key
    bucket_type& get_bucket(Key const& key) const //[7] The number of buckets is fixed, so get_bucket needs no lock
    {
        std::size_t const bucket_index = hasher(key) % buckets.size();
        return *buckets[bucket_index];
    }

public:
    typedef Key key_type; 
    typedef Value mapped_type;
    typedef Hash hash_type;

    //The constructor creates 19 buckets by default and default-constructs the std::hash<Key> instance
    threadsafe_lookup_table(unsigned num_buckets = 19, Hash const& hasher_ = Hash()) :
        buckets(num_buckets), hasher(hasher_)
    {
        for (unsigned i = 0; i < num_buckets; ++i)
        {
            buckets[i].reset(new bucket_type);
        }
    }
    threadsafe_lookup_table(threadsafe_lookup_table &&) = default;
    threadsafe_lookup_table(const threadsafe_lookup_table &) = delete;
    threadsafe_lookup_table &operator=(threadsafe_lookup_table &&) = default;
    threadsafe_lookup_table &operator=(const threadsafe_lookup_table &) = delete;
    ~threadsafe_lookup_table() { }
	//External public interface
    Value value_for(Key const& key, Value const& default_value = Value()) const
    {
        return get_bucket(key).value_for(key, default_value);
    }

    void add_or_update_mapping(Key const& key, Value const& value)
    {
        get_bucket(key).add_or_update_mapping(key, value);
    }

    void remove_mapping(Key const& key)
    {
        get_bucket(key).remove_mapping(key);
    }
    
};

#endif // !_THREADSAFE_LOOKUP_TABLE_H_

Adding, removing, and modifying data in a bucket requires taking that bucket's lock (shared or exclusive), and the buckets are independent of one another, so this lookup table offers relatively high concurrency. One special note: several comments in the code above say "const removed". If those const qualifiers are not removed, the code fails to compile. Here is the explanation:

The error: the iterator types do not match.

The reason: begin() and end() of std::list<T> in the standard library each have two overloads, of the following form:

iterator begin() noexcept; //(from C++11) 
const_iterator begin() const noexcept; //(from C++11) 
const_iterator cbegin() const noexcept; //(from C++11) 

iterator end() noexcept; //(from C++11) 
const_iterator end() const noexcept; //(from C++11) 
const_iterator cend() const noexcept; //(from C++11) 

If you add const after the function name, as in the code printed in the book, then inside that member function the object, including the data member, is treated as const. data.begin() therefore selects the const overload and returns a const_iterator rather than an iterator, so the declared return type does not match (and the list could not be modified through the iterator afterwards anyway).
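
A minimal reproduction of the mismatch (made-up names, for illustration):

#include <list>

struct widget_registry //made-up type to demonstrate the compile error
{
    std::list<int> data;

    //In a const member function 'data' is const, so data.begin() selects the
    //const overload and returns a const_iterator:
    std::list<int>::iterator find_first() const
    {
        return data.begin(); //error: no conversion from const_iterator to iterator
    }
};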

6.3.2 Writing a thread-safe list using locks

A linked list is one of the most basic data structures, so it ought to be easy to make thread-safe, shouldn't it? Well, that depends on what features you need, and in particular whether you need iterator support. This is something the book has consistently avoided adding to its basic designs because it is too complex. STL-style iterator support means the iterator must hold some kind of reference into the container's internal data structure. If the container can be modified by another thread, that reference must remain valid, which fundamentally requires the iterator to hold a lock on some part of the structure. Given that the lifetime of an STL-style iterator is completely outside the container's control, this is a bad idea.

An alternative is to provide iteration functions, such as a for_each(), as part of the container itself. This leaves the container fully in charge of iteration and locking, but it conflicts with the deadlock-avoidance guidelines of Chapter 3: for for_each() to do anything useful, it must call user-supplied code while holding an internal lock. Moreover, to let the user-supplied code work on the data items, it must pass a reference to each item to that code. This could be avoided by passing a copy of each item instead, but when the items are large, copying is expensive.

For now, then, it is left to the user to guarantee that the operations they supply neither cause deadlock by acquiring locks nor cause data races by storing references for access outside the locks. In the case of the list used by the lookup table, this is perfectly safe, because the lookup table does nothing improper with it.

The linked list needs to support the following operations:

  • Add a new item to the list;
  • Remove items that satisfy a condition from the list;
  • Find items in the list that satisfy a condition;
  • Update items that satisfy a condition;
  • Copy each item of the list into another container.

#ifndef _THREADSAFE_LIST_H_
#define _THREADSAFE_LIST_H_

#include <memory>
#include <mutex>
#include <utility>

template <typename T>
class threadsafe_list
{

private:
    struct node
    {
        std::mutex m;
        std::shared_ptr<T> data;
        std::unique_ptr<node> next;

        node() : next(nullptr) { }
        node(T const& value) : data(std::make_shared<T>(value)){ } 
        node(node && old_node): data(std::move(old_node.data)), next(std::move(old_node.next)) { }
    };

    node head;
public:
    threadsafe_list()
    { }
    threadsafe_list(threadsafe_list &&) = delete; //the std::mutex inside head makes the list non-movable
    threadsafe_list(const threadsafe_list &) = delete;
    threadsafe_list &operator=(threadsafe_list &&) = delete;
    threadsafe_list &operator=(const threadsafe_list &) = delete;
    ~threadsafe_list()
    {
        remove_if([](node const&){return true;});
    }

    void push_front(T const& value) 
    {
        std::unique_ptr<node> new_node(new node(value)); //Allocate the new node outside the lock
        std::lock_guard<std::mutex> lock(head.m);
        new_node->next = std::move(head.next); //Splice the new node in front of the old first node
        head.next = std::move(new_node);
    }

    template<typename Function>
    void for_each(Function f)
    {   //Hand-over-hand locking: take the next node's lock before releasing the current one
        node *current = &head;
        std::unique_lock<std::mutex> lock(head.m);
        while (node* const next = current->next.get())
        {
            std::unique_lock<std::mutex> next_lock(next->m);
            lock.unlock(); //The previous node's lock is no longer needed
            f(*next->data);
            current = next;
            lock = std::move(next_lock);
        }
    }

    template<typename Predicate>
    std::shared_ptr<T> find_first_if(Predicate p)
    {
        node *current = &head;
        std::unique_lock<std::mutex> lock(head.m);
        while (node* const next = current->next.get())
        {
            std::unique_lock<std::mutex> next_lock(next->m);
            lock.unlock();
            if(p(*next->data))
            {
                return next->data;
            }           
            lock = std::move(next_lock);
        }
        return std::shared_ptr<T>();
    }

    template<typename Predicate>
    void remove_if(Predicate p)
    {
        node *current = &head;
        std::unique_lock<std::mutex> lock(head.m);
        while (node* const next = current->next.get())
        {
            std::unique_lock<std::mutex> next_lock(next->m);
            if (p(*next->data)) //Keep the current node's lock while testing
            {   //Unlink the matched node while holding both locks
                std::unique_ptr<node> old_next = std::move(current->next);
                current->next = std::move(next->next);
                next_lock.unlock(); //Must be released before old_next destroys the node
            }
            else
            {   //Hand the lock over to the next node and advance
                lock.unlock();
                current = next;
                lock = std::move(next_lock);
            }
        }        
    }
    
};


#endif // !_THREADSAFE_LIST_H_

Debugging note: an earlier version of remove_if, which called lock.unlock() unconditionally before testing the predicate and moved next->next instead of current->next, crashed at runtime with the following error (note: the pthread library was linked):

(gdb) bt
#0  threadsafe_list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::push_front (this=0x7fffffffdbf0, value="a")
    at /home/wangs7/VSCode/Concurrency/6th_chapter/threadsafelist/threadsafe_list.hpp:40
#1  0x0000000000408325 in main (argc=1, argv=0x7fffffffdd58) at /home/wangs7/VSCode/Concurrency/6th_chapter/threadsafelist/main.cpp:11
(gdb) c
Continuing.

Breakpoint 2, threadsafe_list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::push_front (this=0x7fffffffdbf0, value="a")
    at /home/wangs7/VSCode/Concurrency/6th_chapter/threadsafelist/threadsafe_list.hpp:41
41              new_node->next = std::move(head.next);
(gdb) c
Continuing.

Breakpoint 3, threadsafe_list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::push_front (this=0x7fffffffdbf0, value="a")
    at /home/wangs7/VSCode/Concurrency/6th_chapter/threadsafelist/threadsafe_list.hpp:42
42              head.next = std::move(new_node);
(gdb) c
Continuing.
terminate called after throwing an instance of 'std::system_error'
  what():  Operation not permitted

Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50        return ret;
(gdb) q
A debugging session is active.

        Inferior 1 [process 9419] will be killed.

The cause of the crash: after the unconditional lock.unlock(), the lock no longer owned its mutex. Either the else branch then unlocked it a second time, or, after a "matched" node (which was never actually unlinked, since std::move(next->next) does not splice the list), the unlock at the top of the next iteration did. Calling unlock() on a std::unique_lock that does not own its mutex throws std::system_error with "Operation not permitted", which is exactly the message in the trace; the abort appears after the push_front breakpoints because the throw happens when main returns and the destructor invokes remove_if. The corrected version above keeps the current node's lock until the predicate has been tested and unlinks with current->next = std::move(next->next), fixing both problems.
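
With the corrected remove_if, a quick check (a hypothetical main.cpp, mirroring the file layout from the gdb trace):

#include <iostream>
#include <string>
//#include "threadsafe_list.hpp" //file name as it appears in the gdb trace

int main()
{
    threadsafe_list<std::string> list;
    list.push_front("a");
    list.push_front("b");
    list.push_front("c");

    list.remove_if([](std::string const& s) { return s == "b"; });

    //Expected output: c then a; "b" has been unlinked
    list.for_each([](std::string const& s) { std::cout << s << '\n'; });
}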

Topics: C++ Back-end