Simpler concurrent code, stronger concurrency control

Posted by iamngk on Tue, 08 Mar 2022 11:10:39 +0100

Do you ever feel that Go's sync package is not enough? Have you ever needed atomic operations on a type that sync/atomic does not support?

Let's take a look at what go-zero's syncx package adds on top of the standard library.

  • AtomicBool: atomic class for bool
  • AtomicDuration: atomic class for time.Duration
  • AtomicFloat64: atomic class for float64
  • Barrier: fence that wraps lock/unlock around an operation
  • Cond: condition variable
  • DoneChan: graceful close notification
  • ImmutableResource: a resource that is never modified after creation
  • Limit: controls the number of requests
  • LockedCalls: ensures methods are called serially
  • ManagedResource: resource management
  • Once: provides a once func
  • OnceGuard: one-shot resource management
  • Pool: a simple pool
  • RefResource: reference-counted resource
  • ResourceManager: resource manager
  • SharedCalls: similar functionality to singleflight
  • SpinLock: spin lock, spin + CAS
  • TimeoutLimit: Limit with timeout control

Let's walk through these components one by one.


AtomicFloat64

Since Go had no generics at the time, syncx provides a separate atomic class per type. The following uses float64 as an example:

func (f *AtomicFloat64) Add(val float64) float64 {
	for {
		old := f.Load()
		nv := old + val
		if f.CompareAndSwap(old, nv) {
			return nv
		}
	}
}

func (f *AtomicFloat64) CompareAndSwap(old, val float64) bool {
	return atomic.CompareAndSwapUint64((*uint64)(f), math.Float64bits(old), math.Float64bits(val))
}

func (f *AtomicFloat64) Load() float64 {
	return math.Float64frombits(atomic.LoadUint64((*uint64)(f)))
}

func (f *AtomicFloat64) Set(val float64) {
	atomic.StoreUint64((*uint64)(f), math.Float64bits(val))
}

  • Add(val): if the CAS fails, the for loop retries: load the old value again and attempt to set old+val;

  • CompareAndSwap(old, new): calls the CAS of the underlying atomic package on the uint64 bits;

  • Load(): calls atomic.LoadUint64, then converts the bits back to float64;

  • Set(val): calls atomic.StoreUint64 with math.Float64bits(val).

As for other types, developers who want to extend their own can follow basically the same pattern: call the underlying atomic operation, then convert to and from the required type. For bool, for example, 0 and 1 can represent false and true.
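Following that recipe, here is a minimal sketch of a bool atomic class built on uint32, where 0 stands for false and 1 for true. This is an illustration of the pattern, not go-zero's actual AtomicBool implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// AtomicBool stores false as 0 and true as 1 in a uint32,
// so the uint32 atomic primitives can be reused directly.
type AtomicBool uint32

func boolToUint32(v bool) uint32 {
	if v {
		return 1
	}
	return 0
}

// Set stores the value atomically.
func (b *AtomicBool) Set(v bool) {
	atomic.StoreUint32((*uint32)(b), boolToUint32(v))
}

// Load reads the value atomically.
func (b *AtomicBool) Load() bool {
	return atomic.LoadUint32((*uint32)(b)) == 1
}

// CompareAndSwap swaps to val only if the current value equals old.
func (b *AtomicBool) CompareAndSwap(old, val bool) bool {
	return atomic.CompareAndSwapUint32((*uint32)(b), boolToUint32(old), boolToUint32(val))
}

func main() {
	var flag AtomicBool
	flag.Set(true)
	fmt.Println(flag.Load())                      // true
	fmt.Println(flag.CompareAndSwap(true, false)) // true: swap succeeded
	fmt.Println(flag.Load())                      // false
}
```

The same conversion trick works for any fixed-size type: map it onto uint32/uint64 bits, run the atomic operation, and map the result back.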


Barrier

Barrier wraps locking and unlocking around an operation, so the developer can never forget to release the lock:

func (b *Barrier) Guard(fn func()) {
	b.lock.Lock()
	defer b.lock.Unlock()
	// Your own business logic
	fn()
}
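The pattern above can be exercised end to end. The sketch below, assuming the same two-field Barrier shape, shows that wrapping a shared counter in Guard makes concurrent increments safe, and that the deferred Unlock fires even if fn panics:

```go
package main

import (
	"fmt"
	"sync"
)

// Barrier wraps a mutex so callers cannot forget to unlock:
// Guard locks, runs fn, and the deferred Unlock always runs.
type Barrier struct {
	lock sync.Mutex
}

func (b *Barrier) Guard(fn func()) {
	b.lock.Lock()
	defer b.lock.Unlock()
	fn()
}

func main() {
	var (
		b     Barrier
		count int
		wg    sync.WaitGroup
	)
	// 100 goroutines increment the counter under the guard.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.Guard(func() { count++ })
		}()
	}
	wg.Wait()
	fmt.Println(count) // 100
}
```

Because Unlock is deferred inside Guard, a panic in the business logic still releases the lock, which a bare Lock/Unlock pair would not guarantee.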


Cond, Limit and TimeoutLimit

Cond and Limit together form TimeoutLimit, so let's cover these three together:

func NewTimeoutLimit(n int) TimeoutLimit {
	return TimeoutLimit{
		limit: NewLimit(n),
		cond:  NewCond(),
	}
}

func NewLimit(n int) Limit {
	return Limit{
		pool: make(chan lang.PlaceholderType, n),
	}
}

  • limit holds a buffered channel;
  • cond is an unbuffered channel;

Combined with the names, this makes sense: Limit restricts the use of some resource, so a preset number of slots goes into the resource pool up front; Cond works like a valve, where both sides must be ready for the data exchange to happen, so it uses an unbuffered channel for synchronous control.
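The buffered-channel limiter can be sketched in a few lines. This is a simplified stand-in for go-zero's Limit (placeholder here plays the role of lang.PlaceholderType): borrowing occupies one of the n slots in the channel, and returning frees one:

```go
package main

import (
	"errors"
	"fmt"
)

// placeholder is a zero-size token occupying a slot in the pool.
type placeholder struct{}

// Limit restricts concurrent use of a resource to the pool's capacity.
type Limit struct {
	pool chan placeholder
}

func NewLimit(n int) Limit {
	return Limit{pool: make(chan placeholder, n)}
}

// TryBorrow takes a slot without blocking; false means all slots are taken.
func (l Limit) TryBorrow() bool {
	select {
	case l.pool <- placeholder{}:
		return true
	default:
		return false
	}
}

// Return frees a slot; it errs if nothing was borrowed.
func (l Limit) Return() error {
	select {
	case <-l.pool:
		return nil
	default:
		return errors.New("return on empty limit")
	}
}

func main() {
	l := NewLimit(2)
	fmt.Println(l.TryBorrow()) // true
	fmt.Println(l.TryBorrow()) // true
	fmt.Println(l.TryBorrow()) // false: both slots taken
	fmt.Println(l.Return())    // <nil>
	fmt.Println(l.TryBorrow()) // true: a slot was freed
}
```

The buffered channel does all the counting: its capacity is the limit, and select with a default branch turns blocking sends/receives into non-blocking tries.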

Here, let's look at the session management in stores/mongo to see the resource control in action:

func (cs *concurrentSession) takeSession(opts ...Option) (*mgo.Session, error) {
	// Option parameter injection
	// Try to borrow a resource from limit
	if err := cs.limit.Borrow(o.timeout); err != nil {
		return nil, err
	} else {
		return cs.Copy(), nil
	}
}

func (l TimeoutLimit) Borrow(timeout time.Duration) error {
	// 1. If there are still resources in limit, take one and return
	if l.TryBorrow() {
		return nil
	}

	// 2. The resources in limit are used up
	var ok bool
	for {
		// Wait for cond to release one [unbuffered: only a matching `cond <-` send lets this pass]
		timeout, ok = l.cond.WaitWithTimeout(timeout)
		// Try to borrow one [when cond fires, a resource has just been returned; see `Return()`]
		if ok && l.TryBorrow() {
			return nil
		}
		// Timeout control
		if timeout <= 0 {
			return ErrTimeout
		}
	}
}

func (l TimeoutLimit) Return() error {
	// Give a resource back
	if err := l.limit.Return(); err != nil {
		return err
	}
	// Synchronously notify another goroutine waiting for a resource [the valve: both sides exchange]
	l.cond.Signal()
	return nil
}

Resource management

There is also a ResourceManager in the same folder, similar in name to ManagedResource, so let's explain the two components together.

First, the structures:

type ManagedResource struct {
	// The resource itself
	resource interface{}
	lock     sync.RWMutex
	// The logic for generating the resource, controlled by the developer
	generate func() interface{}
	// Compares two resources
	equals func(a, b interface{}) bool
}

type ResourceManager struct {
	// Resources: note the io.Closer here
	resources   map[string]io.Closer
	sharedCalls SharedCalls
	// Mutually exclusive access to the resource map
	lock sync.RWMutex
}

Next, let's look at the method signatures for obtaining resources:

func (manager *ResourceManager) GetResource(key string, create func() (io.Closer, error)) (io.Closer, error)

// Obtain the resource (if one already exists, return it directly without generating a new one)
func (mr *ManagedResource) Take() interface{}
// If the passed-in resource equals the current one by the injected comparison, mark it broken and reset it
func (mr *ManagedResource) MarkBroken(resource interface{})

  1. ResourceManager uses SharedCalls to deduplicate concurrent requests and caches resources in its internal resource map. Since the create func passed in typically involves I/O, it is commonly used for caching network resources;

  2. ManagedResource caches a single interface{} rather than a map: it describes only one resource, and Take() plus the injected generate() let developers refresh the resource themselves;

Therefore, in terms of purpose:

  • ResourceManager: manages network resources, such as database connections;
  • ManagedResource: manages resources that change over time, where before-and-after comparison decides whether to refresh, such as token management and validation.
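To make the ResourceManager idea concrete, here is a minimal get-or-create cache in its spirit. It is a sketch only: the type and field names are hypothetical, the value type is a plain string instead of io.Closer, and the real component additionally deduplicates concurrent calls through SharedCalls:

```go
package main

import (
	"fmt"
	"sync"
)

// manager caches resources by key: the first GetResource for a key
// runs create once, later calls reuse the cached value.
type manager struct {
	lock      sync.Mutex
	resources map[string]string
	creates   int // how many times create actually ran (for illustration)
}

func (m *manager) GetResource(key string, create func() string) string {
	m.lock.Lock()
	defer m.lock.Unlock()
	if r, ok := m.resources[key]; ok {
		return r // cache hit: no new resource generated
	}
	m.creates++
	r := create()
	m.resources[key] = r
	return r
}

func main() {
	m := &manager{resources: map[string]string{}}
	connect := func() string { return "conn-to-db" }

	fmt.Println(m.GetResource("db", connect)) // conn-to-db
	fmt.Println(m.GetResource("db", connect)) // conn-to-db (cached)
	fmt.Println(m.creates)                    // 1: create ran only once
}
```

Holding the lock across the create call keeps the sketch simple but serializes all creation; the SharedCalls layer in the real component exists precisely so that only callers of the same key wait on each other.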


RefResource

This works like the reference count in a GC:

  • Use() -> ref++
  • Clean() -> ref--; if ref == 0 -> ref clean

func (r *RefResource) Use() error {
	// Mutually exclusive access
	r.lock.Lock()
	defer r.lock.Unlock()
	// Cleaned flag
	if r.cleaned {
		return ErrUseOfCleaned
	}
	// Reference + 1
	r.ref++
	return nil
}


SharedCalls

In one sentence: with SharedCalls, multiple simultaneous requests for the same resource share a single call and its result, and the other requests simply reap the benefit. This design effectively reduces concurrency pressure on the resource service and prevents cache breakdown.

This component is reused inside other components, such as the ResourceManager described above.

Likewise, whenever a resource faces high-frequency concurrent access, SharedCalls can act as a cache.

// When multiple requests use the Do method to request the resource at the same time
func (g *sharedGroup) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	// Acquire the lock first
	g.lock.Lock()

	// Look up an in-flight call for the key and save it in c
	if c, ok := g.calls[key]; ok {
		// Once we have the call, release the lock. c may hold no actual data yet, just an empty slot
		g.lock.Unlock()
		// wg.Wait blocks while another goroutine is still fetching the resource
		c.wg.Wait()
		// When wg.Wait returns, the fetch has finished and the result can be returned directly
		return c.val, c.err
	}

	// No result found: call makeCall to fetch the resource. Note the lock is still held,
	// which guarantees only one goroutine calls makeCall
	c := g.makeCall(key, fn)
	// Return the call result
	return c.val, c.err
}


Not reinventing the wheel has always been one of go-zero's design tenets; at the same time, distilling everyday business patterns into components is exactly what a framework and its components are for.

For more articles on the design and implementation of go-zero, keep following us. You are welcome to follow and use the project.

Project address

Welcome to use go-zero, and star the project to support us!

WeChat group

Follow the official account "micro-service practice" and reply "community" to get the QR code for the community group.
