Implementation of a goroutine pool in fasthttp
A goroutine pool lets you bound concurrency and reuse goroutines. One important reason fasthttp is many times more efficient than net/http is its use of such a worker pool. The implementation is not complicated, and we can borrow its design when writing our own high-performance applications.
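Before digging into the fasthttp source, here is a minimal, self-contained sketch of the two ideas this post walks through: cap the number of worker goroutines, and hand work to an already-running worker over a channel instead of spawning a new goroutine per task. This is not fasthttp's code; `pool`, `newPool`, `submit`, and `maxWorkers` are made-up names for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a toy worker pool: at most maxWorkers goroutines ever exist,
// and each one is reused for many tasks.
type pool struct {
	tasks chan func()
	wg    sync.WaitGroup
}

func newPool(maxWorkers int) *pool {
	p := &pool{tasks: make(chan func())}
	p.wg.Add(maxWorkers)
	for i := 0; i < maxWorkers; i++ {
		go func() {
			defer p.wg.Done()
			// The goroutine is reused: it loops over the channel
			// instead of exiting after a single task.
			for task := range p.tasks {
				task()
			}
		}()
	}
	return p
}

func (p *pool) submit(task func()) { p.tasks <- task }

func (p *pool) stop() {
	close(p.tasks)
	p.wg.Wait()
}

func main() {
	p := newPool(4) // concurrency is bounded at 4
	for i := 0; i < 10; i++ {
		i := i
		p.submit(func() { fmt.Println("handled task", i) })
	}
	p.stop()
}
```

fasthttp refines this basic shape in two ways we will see below: workers are created lazily instead of up front, and each worker owns its own channel, so idle workers can be tracked by last-use time and cleaned up.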
Entry point
```go
// server.go
func (s *Server) Serve(ln net.Listener) error {
	var lastOverflowErrorTime time.Time
	var lastPerIPErrorTime time.Time
	var c net.Conn
	var err error

	maxWorkersCount := s.getConcurrency()
	s.concurrencyCh = make(chan struct{}, maxWorkersCount)
	wp := &workerPool{
		WorkerFunc:      s.serveConn,
		MaxWorkersCount: maxWorkersCount,
		LogAllErrors:    s.LogAllErrors,
		Logger:          s.logger(),
	}
	// break-01
	wp.Start()

	for {
		// break-02
		if c, err = acceptConn(s, ln, &lastPerIPErrorTime); err != nil {
			wp.Stop()
			if err == io.EOF {
				return nil
			}
			return err
		}
		// break-03
		if !wp.Serve(c) {
			s.writeFastError(c, StatusServiceUnavailable,
				"The connection cannot be served because Server.Concurrency limit exceeded")
			c.Close()
			if time.Since(lastOverflowErrorTime) > time.Minute {
				s.logger().Printf("The incoming connection cannot be served, because %d concurrent connections are served. "+
					"Try increasing Server.Concurrency", maxWorkersCount)
				lastOverflowErrorTime = CoarseTimeNow()
			}

			// The current server reached concurrency limit,
			// so give other concurrently running servers a chance
			// accepting incoming connections on the same address.
			//
			// There is a hope other servers didn't reach their
			// concurrency limits yet :)
			time.Sleep(100 * time.Millisecond)
		}
		c = nil
	}
}
```

It is necessary to understand the structure of workerPool first:

```go
// workerpool.go
type workerPool struct {
	// Function for serving server connections.
	// It must leave c unclosed.
	WorkerFunc func(c net.Conn) error

	MaxWorkersCount int

	LogAllErrors bool

	MaxIdleWorkerDuration time.Duration

	Logger Logger

	lock         sync.Mutex
	workersCount int
	mustStop     bool

	ready []*workerChan

	stopCh chan struct{}

	workerChanPool sync.Pool
}
```
goroutine status:

- main0: wp.Start()
break-01
```go
// workerpool.go
// Start launches a goroutine that cleans up []*workerChan every once in a while.
// wp.clean() looks at the least recently used workerChans and removes any whose
// idle time exceeds MaxIdleWorkerDuration.
func (wp *workerPool) Start() {
	if wp.stopCh != nil {
		panic("BUG: workerPool already started")
	}
	wp.stopCh = make(chan struct{})
	stopCh := wp.stopCh
	go func() {
		var scratch []*workerChan
		for {
			wp.clean(&scratch)
			select {
			case <-stopCh:
				return
			default:
				time.Sleep(wp.getMaxIdleWorkerDuration())
			}
		}
	}()
}
```
goroutine status:

- main0: wp.Start()
- g1: for loop to clean idle workerChan
break-02
acceptConn(s, ln, &lastPerIPErrorTime) mainly wraps ln.Accept(): it checks whether the error is temporary (retrying if it is) and returns a net.Conn.
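acceptConn itself is not reproduced here. The sketch below is not the fasthttp implementation, only an illustration of the usual retry-on-temporary-error pattern it follows (`acceptLoop` is a made-up name, and whatever acceptConn does with lastPerIPErrorTime is omitted):

```go
package sketch

import (
	"net"
	"time"
)

// acceptLoop is a simplified stand-in for acceptConn: it keeps calling
// ln.Accept(), backs off and retries on temporary errors, and otherwise
// returns the connection or the permanent error.
func acceptLoop(ln net.Listener) (net.Conn, error) {
	for {
		c, err := ln.Accept()
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Temporary() {
				// Transient failure (e.g. "too many open files"):
				// wait a bit and try again instead of killing the server.
				time.Sleep(time.Second)
				continue
			}
			return nil, err
		}
		return c, nil
	}
}
```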
break-03
```go
// workerpool.go
func (wp *workerPool) Serve(c net.Conn) bool {
	// break-04
	ch := wp.getCh()
	if ch == nil {
		return false
	}
	ch.ch <- c
	return true
}

type workerChan struct {
	lastUseTime time.Time
	ch          chan net.Conn
}
```
wp.getCh() returns a *workerChan. You can see that workerChan has a ch field of type chan net.Conn, and the accepted net.Conn is sent straight into it.
break-04
```go
// workerpool.go
func (wp *workerPool) getCh() *workerChan {
	var ch *workerChan
	createWorker := false

	wp.lock.Lock()
	ready := wp.ready
	n := len(ready) - 1
	if n < 0 {
		// ready is empty; if the total is still below MaxWorkersCount,
		// a new workerChan needs to be created
		if wp.workersCount < wp.MaxWorkersCount {
			createWorker = true
			wp.workersCount++
		}
	} else {
		// Take a workerChan from the tail of the ready queue
		ch = ready[n]
		ready[n] = nil
		wp.ready = ready[:n]
	}
	wp.lock.Unlock()

	if ch == nil {
		if !createWorker {
			return nil
		}
		// Creation path: try to take a workerChan out of the sync.Pool;
		// if there is none, allocate a new one
		vch := wp.workerChanPool.Get()
		if vch == nil {
			vch = &workerChan{
				ch: make(chan net.Conn, workerChanCap),
			}
		}
		ch = vch.(*workerChan)
		// Start a goroutine that processes requests; it receives the
		// *workerChan as a parameter
		go func() {
			// break-05
			wp.workerFunc(ch)
			wp.workerChanPool.Put(vch)
		}()
	}
	return ch
}
```
Here we only look at the creation path. If ready is empty, the idle workers are exhausted; as long as workersCount is still below MaxWorkersCount, a new workerChan gets created.

When creating one, getCh first tries to take a reusable workerChan out of the sync.Pool; only if that returns nil does it allocate a new one.

You can already predict that wp.workerFunc(ch) must contain a for loop that keeps handling the net.Conn values arriving on the workerChan.
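The sync.Pool part of that creation path is worth isolating. Here is the same pattern on its own (acquireWorkerChan and releaseWorkerChan are made-up helper names, not fasthttp functions): try to reuse an old object from the pool, fall back to allocating, and put it back when its goroutine is done with it.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

type workerChan struct {
	ch chan net.Conn
}

var workerChanPool sync.Pool

// acquireWorkerChan mirrors the creation branch of getCh: reuse an old
// workerChan from the sync.Pool if there is one, otherwise allocate.
func acquireWorkerChan() *workerChan {
	v := workerChanPool.Get()
	if v == nil {
		return &workerChan{ch: make(chan net.Conn, 1)}
	}
	return v.(*workerChan)
}

// releaseWorkerChan mirrors what the goroutine started in getCh does once
// workerFunc returns: the object (and its channel) goes back to the pool.
func releaseWorkerChan(wch *workerChan) {
	workerChanPool.Put(wch)
}

func main() {
	a := acquireWorkerChan()
	releaseWorkerChan(a)
	b := acquireWorkerChan()
	fmt.Println(a == b) // usually true: the object was reused, not reallocated
}
```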
goroutine status:

- main0: wp.Start()
- g1: for loop to clean idle workerChan
- g2: wp.workerFunc(ch) blocks for handling connection
break-05
```go
// workerpool.go
func (wp *workerPool) workerFunc(ch *workerChan) {
	var c net.Conn

	var err error
	for c = range ch.ch {
		if c == nil {
			break
		}

		// The function that actually handles the request
		if err = wp.WorkerFunc(c); err != nil && err != errHijacked {
			errStr := err.Error()
			if wp.LogAllErrors || !(strings.Contains(errStr, "broken pipe") ||
				strings.Contains(errStr, "reset by peer") ||
				strings.Contains(errStr, "i/o timeout")) {
				wp.Logger.Printf("error when serving connection %q<->%q: %s", c.LocalAddr(), c.RemoteAddr(), err)
			}
		}
		if err != errHijacked {
			c.Close()
		}
		c = nil

		// Release the workerChan back to the pool
		// break-06
		if !wp.release(ch) {
			break
		}
	}

	// Leaving the for range loop means either a nil was received from the
	// channel, or wp.mustStop was set to true (the explicit stop path).
	wp.lock.Lock()
	wp.workersCount--
	wp.lock.Unlock()
}
```
The for range keeps receiving connections from the chan net.Conn. Remember that in func (wp *workerPool) Serve(c net.Conn) bool, the key operation was to push the accepted connection into exactly this channel.

Finally, the worker has to release its workerChan back into the workerPool's ready slice.
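The Serve loop at the top calls wp.Stop() when Accept fails, but Stop itself is never shown in this post. The two ways workerFunc exits (receiving a nil connection, or release failing because mustStop is set, see break-06 below) hint at what it has to do. The following is a hedged reconstruction in the spirit of workerpool.go, not a verbatim copy of the fasthttp source:

```go
// Sketch of Stop: tell g1 to exit, wake every idle worker with a nil
// connection so its `for c = range ch.ch` loop breaks, and set mustStop
// so busy workers stop as soon as they call release().
func (wp *workerPool) Stop() {
	if wp.stopCh == nil {
		panic("BUG: workerPool wasn't started")
	}
	close(wp.stopCh) // g1's select sees this and returns
	wp.stopCh = nil

	wp.lock.Lock()
	ready := wp.ready
	for i := range ready {
		ready[i].ch <- nil // idle worker receives nil and exits its loop
		ready[i] = nil
	}
	wp.ready = ready[:0]
	wp.mustStop = true
	wp.lock.Unlock()
}
```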
break-06
```go
func (wp *workerPool) release(ch *workerChan) bool {
	ch.lastUseTime = CoarseTimeNow()
	wp.lock.Lock()
	if wp.mustStop {
		wp.lock.Unlock()
		return false
	}
	wp.ready = append(wp.ready, ch)
	wp.lock.Unlock()
	return true
}
```
During the release operation, notice that ch.lastUseTime is updated. Remember the clean operation running in goroutine g1? This timestamp is what it uses to decide which idle workerChans to drop.
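wp.clean() is not shown in this post either. Based on what we have seen, ready behaves as a LIFO stack, so the entries at its front are the ones that have been idle the longest, and lastUseTime is the timestamp release stamps on them. A simplified sketch of the cleaning pass could therefore look like the following; the real fasthttp code differs in detail, so treat this as an illustration only:

```go
// Simplified sketch of the cleaning pass, not the exact fasthttp code.
func (wp *workerPool) clean(scratch *[]*workerChan) {
	maxIdle := wp.getMaxIdleWorkerDuration()

	wp.lock.Lock()
	ready := wp.ready
	// ready is ordered from least to most recently released, so the
	// expired workers form a prefix of the slice.
	i := 0
	for i < len(ready) && time.Since(ready[i].lastUseTime) > maxIdle {
		i++
	}
	*scratch = append((*scratch)[:0], ready[:i]...)
	if i > 0 {
		m := copy(ready, ready[i:])
		for j := m; j < len(ready); j++ {
			ready[j] = nil
		}
		wp.ready = ready[:m]
	}
	wp.lock.Unlock()

	// Stop the expired workers outside the lock: a nil connection makes
	// workerFunc's loop break, and the goroutine exits.
	for _, ch := range *scratch {
		ch.ch <- nil
	}
}
```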
So the final running state is:
goroutine status:

- main0: wp.Start()
- g1: for loop to clean idle workerChan
- g2: wp.workerFunc(ch) blocks for handling connection
- g3: ....
- g4: ....
The number of goroutines grows on demand but is capped at MaxWorkersCount, so concurrency stays under control. When requests are dense, a single worker goroutine serves many connections one after another.
workerChan objects are reused through the sync.Pool (and the ready slice), which takes a lot of pressure off the GC.
By contrast, with the standard net/http package the server itself does not bound the number of goroutines (the runtime schedules them, but nothing limits how many are created), and goroutines are not reused: each connection gets a fresh goroutine that is thrown away when it is done, which puts more pressure on the machine.
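For comparison, the goroutine-per-connection model used by net/http-style servers looks roughly like the sketch below (an illustration of the pattern, not the actual net/http source): every accepted connection costs a brand-new goroutine, there is no built-in cap on how many exist at once, and nothing is reused when the connection ends.

```go
package sketch

import "net"

// serveUnpooled illustrates the goroutine-per-connection model: one new
// goroutine per accepted connection, created and then thrown away, with
// no upper bound on how many run at the same time.
func serveUnpooled(ln net.Listener, handle func(net.Conn)) error {
	for {
		c, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			handle(c) // the goroutine dies as soon as the connection is done
		}(c)
	}
}
```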