Data structures
type waiter struct {
	n     int64
	ready chan<- struct{} // Closed when semaphore acquired.
}

// NewWeighted creates a new weighted semaphore with the given
// maximum combined weight for concurrent access.
func NewWeighted(n int64) *Weighted {
	w := &Weighted{size: n}
	return w
}

// Weighted provides a way to bound concurrent access to a resource.
// The callers can request access with a given weight.
type Weighted struct {
	size    int64
	cur     int64
	mu      sync.Mutex
	waiters list.List
}
A waiter represents a single request; its field n is the amount (weight) of resources requested.
Use NewWeighted(n) to create the semaphore, where n is the maximum total weight that may be held concurrently.
Weighted field description
- size is the maximum total weight; it is fixed when the semaphore is created
- cur is a counter recording the weight currently in use, with a value range of [0, size]; it grows on Acquire and shrinks on Release
- mu is the mutex protecting the other fields
- waiters is a FIFO list of requester goroutines currently sleeping. Each requester may ask for a different amount of resources; a requester is appended to the list only when the free resources are insufficient. Each list element represents one goroutine
The counter cur already tracks acquisition and release, so why introduce an amount (weight) at all? Because different callers may need different shares of the same resource: a light request can take weight 1 while a heavy one reserves the full capacity.
Method list
- NewWeighted creates a semaphore; the parameter n is the maximum total available weight;
- Acquire obtains the specified weight; if not enough is free, the calling goroutine sleeps;
- Release returns the specified weight;
- TryAcquire is like Acquire, but when not enough weight is free it returns false immediately instead of blocking.
Acquire and TryAcquire
There are two methods for obtaining resources: Acquire() and TryAcquire(); the difference between the two was introduced above.
Both acquiring and releasing resources take the global mutex first.
When acquiring, there are three cases depending on the free resources:
- Enough resources are free (and nobody is waiting): nil is returned, indicating success.
- The requested weight exceeds the total specified at initialization. Such a request can never succeed, so Acquire simply waits for ctx to be done and returns ctx.Err().
- The currently free resources are insufficient; the caller must wait for other goroutines to release some. The requester is appended to the waiting queue, and what happens next depends on the select below.
// Acquire acquires the semaphore with a weight of n, blocking until resources
// are available or ctx is done. On success, returns nil. On failure, returns
// ctx.Err() and leaves the semaphore unchanged.
//
// If ctx is already done, Acquire may still succeed without blocking.
func (s *Weighted) Acquire(ctx context.Context, n int64) error {
	s.mu.Lock()
	// Fast path: enough resources are free and no one is waiting,
	// so take them and return nil immediately.
	if s.size-s.cur >= n && s.waiters.Len() == 0 {
		s.cur += n
		s.mu.Unlock()
		return nil
	}
	// The requested weight exceeds the semaphore's capacity; this can never
	// succeed, so wait for ctx and return ctx.Err().
	if n > s.size {
		// Don't make other Acquire calls block on one that's doomed to fail.
		s.mu.Unlock()
		<-ctx.Done()
		return ctx.Err()
	}
	// Not enough resources are free: enqueue this requester as a waiter.
	ready := make(chan struct{})
	w := waiter{n: n, ready: ready}
	// Append to the tail of the waiter list, keeping the returned element.
	elem := s.waiters.PushBack(w)
	s.mu.Unlock()

	select {
	case <-ctx.Done():
		// The context was canceled or timed out.
		err := ctx.Err()
		s.mu.Lock()
		select {
		case <-ready:
			// Acquired the semaphore after we were canceled. Rather than trying to
			// fix up the queue, just pretend we didn't notice the cancelation:
			// ignore the signal and return nil to indicate success.
			err = nil
		default:
			// Canceled before acquiring: remove the waiter we added earlier.
			isFront := s.waiters.Front() == elem
			s.waiters.Remove(elem)
			// If our element was at the front of the list and resources are
			// still free, notify the other waiters.
			if isFront && s.size > s.cur {
				s.notifyWaiters()
			}
		}
		s.mu.Unlock()
		return err
	case <-ready:
		// A Release call handed us the resources and closed ready.
		return nil
	}
}
Note that the mutex is released before the select and re-acquired inside the ctx.Done() case: the waiter queue is protected by the global lock, so the lock must be held again before the queue is touched.
TryAcquire() is much simpler: it only checks whether enough resources are free (and nobody is waiting). If so, it takes them and returns true; otherwise it returns false immediately. This method never blocks.
// TryAcquire acquires the semaphore with a weight of n without blocking.
// On success, returns true. On failure, returns false and leaves the semaphore unchanged.
func (s *Weighted) TryAcquire(n int64) bool {
	s.mu.Lock()
	success := s.size-s.cur >= n && s.waiters.Len() == 0
	if success {
		s.cur += n
	}
	s.mu.Unlock()
	return success
}
Release
Releasing is also simple: decrease the used-resource counter by n and notify the other waiters.
// Release releases the semaphore with a weight of n.
func (s *Weighted) Release(n int64) {
	s.mu.Lock()
	s.cur -= n
	if s.cur < 0 {
		s.mu.Unlock()
		panic("semaphore: released more than held")
	}
	s.notifyWaiters()
	s.mu.Unlock()
}
Notification mechanism
notifyWaiters loops from the head of the waiter list. For each waiter whose request fits, it adds w.n to the counter cur, removes the waiter from the list, and closes its ready channel. It stops as soon as it reaches a waiter whose requested weight n exceeds the free resources.
func (s *Weighted) notifyWaiters() {
	for {
		next := s.waiters.Front()
		if next == nil {
			break // No more waiters blocked.
		}

		w := next.Value.(waiter)
		if s.size-s.cur < w.n {
			// Not enough tokens for the next waiter. We could keep going (to try to
			// find a waiter with a smaller request), but under load that could cause
			// starvation for large requests; instead, we leave all remaining waiters
			// blocked.
			//
			// Consider a semaphore used as a read-write lock, with N tokens, N
			// readers, and one writer. Each reader can Acquire(1) to obtain a read
			// lock. The writer can Acquire(N) to obtain a write lock, excluding all
			// of the readers. If we allow the readers to jump ahead in the queue,
			// the writer will starve — there is always one token available for every
			// reader.
			break
		}

		s.cur += w.n
		s.waiters.Remove(next)
		close(w.ready)
	}
}
It follows that if the list contains a waiter with a large request, every waiter behind it stays blocked until enough resources accumulate for it, even when the currently free resources would satisfy some of them; during that time some capacity sits idle. Only after the large waiter is served do the waiters behind it get their turn.
Usage example
The official documentation provides a typical semaphore-based "worker pool" pattern, see https://pkg.go.dev/golang.org/x/sync/semaphore#example-package-WorkerPool, which demonstrates how to limit the number of concurrently working goroutines with a semaphore.
The example below uses a semaphore to concurrently compute the number of Collatz conjecture steps for each integer from 1 to 32 and prints the 32 results.
package main

import (
	"context"
	"fmt"
	"log"
	"runtime"

	"golang.org/x/sync/semaphore"
)

// Example_workerPool demonstrates how to use a semaphore to limit the number of
// goroutines working on parallel tasks.
//
// This use of a semaphore mimics a typical "worker pool" pattern, but without
// the need to explicitly shut down idle workers when the work is done.
func main() {
	ctx := context.TODO()

	// The total weight is the number of logical CPUs.
	var (
		maxWorkers = runtime.GOMAXPROCS(0)
		sem        = semaphore.NewWeighted(int64(maxWorkers))
		out        = make([]int, 32)
	)

	// Compute the output using up to maxWorkers goroutines at a time.
	for i := range out {
		// When maxWorkers goroutines are in flight, Acquire blocks until one of the
		// workers finishes.
		if err := sem.Acquire(ctx, 1); err != nil {
			log.Printf("Failed to acquire semaphore: %v", err)
			break
		}

		go func(i int) {
			defer sem.Release(1)
			out[i] = collatzSteps(i + 1)
		}(i)
	}

	// Acquire the full weight to wait for all workers to finish.
	// (Not needed if the errgroup primitive is used instead.)
	if err := sem.Acquire(ctx, int64(maxWorkers)); err != nil {
		log.Printf("Failed to acquire semaphore: %v", err)
	}

	fmt.Println(out)
}

// collatzSteps computes the number of steps to reach 1 under the Collatz
// conjecture. (See https://en.wikipedia.org/wiki/Collatz_conjecture.)
func collatzSteps(n int) (steps int) {
	if n <= 0 {
		panic("nonpositive input")
	}

	for ; n > 1; steps++ {
		if steps < 0 {
			panic("too many steps")
		}

		if n%2 == 0 {
			n /= 2
			continue
		}

		const maxInt = int(^uint(0) >> 1)
		if n > (maxInt-1)/3 {
			panic("overflow")
		}
		n = 3*n + 1
	}

	return steps
}
As noted above, the total weight equals the number of logical CPUs, and each loop iteration calls sem.Acquire(ctx, 1), so at most maxWorkers goroutines run at once; when no weight is free, the next iteration blocks. There are 32 iterations in total, so at most 32 - maxWorkers of them wait at any time.
Each successfully acquired unit of weight starts the anonymous goroutine, which releases it when the function ends. Finally, sem.Acquire(ctx, int64(maxWorkers)) acquires the full weight, which only succeeds once every worker has released its unit; it acts as a barrier for the remaining workers. If the errgroup synchronization primitive is used instead, this step can be omitted.
The following shows the same example using errgroup:
func main() {
	ctx := context.TODO()

	var (
		maxWorkers = runtime.GOMAXPROCS(0)
		sem        = semaphore.NewWeighted(int64(maxWorkers))
		out        = make([]int, 32)
	)

	group, _ := errgroup.WithContext(context.Background())
	for i := range out {
		if err := sem.Acquire(ctx, 1); err != nil {
			log.Printf("Failed to acquire semaphore: %v", err)
			break
		}

		i := i // capture the loop variable (required before Go 1.22)
		group.Go(func() error {
			defer sem.Release(1)
			out[i] = collatzSteps(i + 1)
			return nil
		})
	}

	// Wait blocks until every function started with group.Go has returned,
	// so the final full-weight Acquire barrier is unnecessary here.
	if err := group.Wait(); err != nil {
		fmt.Println(err)
	}
	fmt.Println(out)
}