[Go synchronization primitives]

Posted by avario on Tue, 14 Dec 2021 03:34:01 +0100

In addition to easy-to-use, high-level synchronization mechanisms such as channels, the Go language also provides lower-level primitives such as sync.Mutex and sync.WaitGroup. Through them, we can control data synchronization and goroutine concurrency more flexibly.

Resource competition

If memory allocated within a goroutine is not accessed by any other goroutine, there is no resource competition. However, if the same piece of memory is accessed by multiple goroutines at the same time, the result is unpredictable, because we cannot know which goroutine accesses it first. This is resource competition, and the memory involved is called a shared resource.

package main

import (
    "fmt"
    "time"
)

// shared resource
var sum = 0

func main() {
    // start 100 goroutines, each adding 10 to sum
    for i := 0; i < 100; i++ {
        go add(10)
    }

    // prevent main from returning before the goroutines finish
    time.Sleep(2 * time.Second)

    fmt.Println("sum is:", sum)
}

func add(i int) {
    sum += i
}

In this example the result may be 990 or 980 rather than the expected 1000. The root cause is that sum is not concurrency safe: multiple goroutines interleave while executing sum += i, which produces unpredictable results.

Tip: the go build, go run, and go test commands all accept a -race flag, which helps us check whether the Go code contains resource competition (data races).

Synchronization primitive

sync.Mutex

A mutex, as the name suggests, ensures that only one goroutine executes a piece of code at a time; other goroutines must wait until that goroutine finishes before they can proceed. Let's modify the example above: declare a mutex and change the add function so that access to the shared resource is concurrency safe.

var sum = 0
var mutex sync.Mutex

func add(i int) {
    mutex.Lock()
    defer mutex.Unlock()
    sum += i
}

The code fragment guarded by the lock, sum += i, is also called the critical section. In synchronization primitive design, a critical section is a code fragment that accesses a shared resource and must not be entered by multiple goroutines at the same time. When one goroutine enters the critical section, the others must wait, which guarantees the concurrency safety of the critical section.

Mutex is very simple to use. It has only two methods, Lock() and Unlock(), for locking and unlocking. While one goroutine holds the Mutex, other goroutines can acquire it only after it has been released.

The Lock and Unlock methods of a Mutex always appear in pairs: after Lock acquires the lock, Unlock must be executed to release it. Therefore, a defer statement is usually used in the function or method to release the lock, which ensures the lock is always released and never forgotten.

sync.RWMutex

sync.Mutex locks the shared resource to ensure concurrency safety. But what if the read operations also come from multiple goroutines?

func main() {
    for i := 0; i < 100; i++ {
        go add(10)
    }

    // start 10 goroutines that read sum at the same time
    for i := 0; i < 10; i++ {
        go fmt.Println("sum is:", getSum())
    }

    time.Sleep(2 * time.Second)
}

func getSum() int {
    b := sum
    return b
}

We start 10 goroutines that read the value of sum at the same time. Because the getSum function has no lock control, it is not concurrency safe: while one goroutine performs sum += i, another goroutine may perform b := sum, so the value read may be stale, and the result is unpredictable. To solve this resource competition, we can again use the mutual exclusion lock sync.Mutex.

func getSum() int {
    mutex.Lock()
    defer mutex.Unlock()
    b := sum 
    return b
}

Because the add and getSum functions use the same sync.Mutex, their operations are mutually exclusive: while one goroutine modifies sum, a goroutine that wants to read must wait until the modification is complete.

This solves the resource competition between concurrent reads and writes, but it introduces another problem: performance. Since the shared resource is locked on every read and every write, how can we improve performance?

Reading and writing form a special scenario with several cases:

1. While writing, you cannot read at the same time, because the reader might see dirty data
2. While reading, you cannot write at the same time, because that could produce unpredictable results
3. Multiple reads can happen at the same time, because the data does not change, so no matter how many goroutines read concurrently, it is concurrency safe

So we can use sync.RWMutex, a read-write lock, to optimize this code and achieve the desired performance.

var mutex sync.RWMutex

func getSum() int {
    // acquire the read lock
    mutex.RLock()
    defer mutex.RUnlock()
    b := sum
    return b
}

This greatly improves performance, because multiple goroutines can now read the data at the same time without waiting for each other.

sync.WaitGroup

In the examples above we used time.Sleep to prevent the main function main() from returning, because once main returns, the program exits. Tip: when a function or method returns, it means the current function has finished executing.

For example, we run 100 add goroutines and 10 getSum goroutines, but we do not know when they finish, so we set a long waiting time. This causes a problem: if the 110 goroutines finish quickly, main should return early, but it still waits the full time, which wastes performance. Conversely, if the goroutines have not finished when the waiting time expires, the program exits early, some goroutines never complete, and the results are unpredictable.

How do we solve this? Is there a way to monitor the execution of all goroutines, so that as soon as they all finish, the program exits immediately? That would both guarantee that every goroutine completes and let us exit promptly, saving time and improving performance. Channels can solve this problem, but it is rather complex; the Go language provides a more concise way: sync.WaitGroup.

func run() {
    var wg sync.WaitGroup
    wg.Add(110)

    for i := 0; i < 100; i++ {
        go func() {
            // counter minus 1
            defer wg.Done()
            add(10)
        }()
    }

    for i := 0; i < 10; i++ {
        go func() {
            defer wg.Done()
            fmt.Println("sum is", getSum())
        }()
    }

    // wait until the counter reaches 0
    wg.Wait()
}

sync.WaitGroup is easy to use. It is divided into three steps:

  1. Declare a sync.WaitGroup, then set the counter value through the Add method to however many goroutines you need to track
  2. When each goroutine finishes, call the Done method to decrease the counter by 1, telling sync.WaitGroup that the goroutine has completed
  3. Finally, call the Wait method to block until the counter reaches 0, that is, until all tracked goroutines have finished

With sync.WaitGroup we can track goroutines well: the whole function returns only after every goroutine has finished, and the time spent is exactly the time the goroutines need to execute.

sync.WaitGroup is suitable for coordinating multiple goroutines doing one job together. Take downloading a file: suppose we use 10 goroutines, each downloading one tenth of the file; only when all 10 finish is the whole file downloaded. This is the multi-threaded downloading we often hear about, where splitting one job across multiple workers significantly improves efficiency. Tip: from a usage point of view you can think of Go's goroutines as similar to ordinary threads, but they differ in technical implementation.

sync.Once

In practical work, we may need code that executes only once, even under high concurrency, for example when creating a singleton. For this scenario, the Go language provides sync.Once to guarantee the code runs only once.

func main() {
    doOnce()
}

func doOnce() {
    var once sync.Once
    onceBody := func() {
        fmt.Println("Only once")
    }

    // used to wait for the goroutines to finish
    done := make(chan bool)

    // start 10 goroutines, each executing once.Do(onceBody)
    for i := 0; i < 10; i++ {
        go func() {
            // pass the function to execute as a parameter to the Do method
            once.Do(onceBody)
            done <- true
        }()
    }

    for i := 0; i < 10; i++ {
        <-done
    }
}

This example comes from the Go documentation itself. Although 10 goroutines are started to execute the onceBody function, the once.Do method guarantees that onceBody runs only once. That is, even under high concurrency, sync.Once ensures the function is executed exactly once.

sync.Once is suitable for creating a singleton object, loading resources only once, and other scenarios where something must be executed exactly once.

sync.Cond

In Go, sync.WaitGroup is used for the final-completion scenario: the key point is waiting for all goroutines to finish. sync.Cond, in contrast, is used to give an order: all goroutines start executing at the command. The key point is that the goroutines first wait, and only run after sync.Cond wakes them up.

sync.Cond literally means condition variable. It can block and wake up goroutines, waking them when certain conditions are met, although the condition variable is only one of its use scenarios.

// ten runners race, and one referee gives the order
func race() {
    cond := sync.NewCond(&sync.Mutex{})
    var wg sync.WaitGroup
    wg.Add(11)
    for i := 0; i < 10; i++ {
        go func(num int) {
            defer wg.Done()
            fmt.Println("Runner", num, "is in place")
            cond.L.Lock()
            cond.Wait() // wait for the starting gun
            fmt.Println("Runner", num, "starts running")
            cond.L.Unlock()
        }(i)
    }

    time.Sleep(2 * time.Second)
    go func() {
        defer wg.Done()
        fmt.Println("The referee is in position with the starting gun")
        fmt.Println("The race begins, everyone starts to run")
        cond.Broadcast() // fire the starting gun
    }()

    wg.Wait()
}

General steps:

  1. Use the sync.NewCond function to create a *sync.Cond, which is used to block and wake up goroutines
  2. Then start 10 goroutines to simulate 10 runners; once ready, each calls cond.Wait() to block the current goroutine and wait for the starting gun. Note that cond.Wait() must be called while holding the lock
  3. time.Sleep is used to wait for everyone to enter the Wait state, so that the referee's cond.Broadcast() will be heard by all of them
  4. When the referee is ready, cond.Broadcast() tells everyone to start running

sync.Cond has three methods:

  1. Wait: blocks the current goroutine until it is awakened by another goroutine calling Broadcast or Signal; it must be called while holding the lock inside sync.Cond, i.e. the L field
  2. Signal: wakes up the goroutine that has been waiting the longest
  3. Broadcast: wakes up all waiting goroutines

Note: before calling Signal or Broadcast, make sure the target goroutines are blocked in Wait, otherwise a deadlock may occur.

Summary: we have learned how to use Go's synchronization primitives, which let us control the concurrency of multiple goroutines more flexibly. In terms of usage, Go still recommends channels, the higher-level concurrency control mechanism, because they are simpler to understand and use. Synchronization primitives are usually reserved for more complex concurrency control, when more flexible control or higher performance is needed.

Topics: Go Back-end lock