React source code analysis 16: Concurrent mode
Concurrent mode
React 17 supports concurrent mode. The fundamental purpose of this mode is to keep applications responsive through both CPU-bound and IO-bound work. It is a set of new capabilities, including Fiber, the Scheduler, and the Lane model, that lets React adapt to the user's hardware performance and network conditions. The core idea is asynchronous, interruptible updates, and concurrent mode is also the main direction of future React iterations.
- CPU: let the time-consuming reconcile process yield the JS thread to higher-priority tasks, such as handling user input.
- IO: rely on Suspense so rendering is not blocked waiting on data.
Fiber
Fiber was introduced earlier; here we look at what fiber means for concurrent mode. In React 15 and earlier, reconciliation runs synchronously: when there are many components and the amount of computation in reconcile is large, the page freezes. To solve this problem, we need asynchronous, interruptible updates, so that time-consuming computation can yield the JS thread to high-priority tasks and resume when the browser is idle. That requires a data structure describing the real DOM plus the pending update information, so the in-memory reconcile process can be interrupted at an appropriate moment. This data structure is the fiber.
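To make the shape of this concrete, here is a minimal, framework-free sketch of a fiber-like node and an interruptible work loop. The names (`createFiber`, `performUnitOfWork`, `shouldYield`) mirror React's, but this is an illustration of the idea, not React's actual implementation:

```javascript
// A fiber is a plain object linking to its first child, next sibling, and
// parent ("return"), so traversal can stop after any single node and resume.
function createFiber(type) {
  return { type, child: null, sibling: null, return: null, alternate: null };
}

// Interruptible work loop: process one fiber at a time; if shouldYield()
// says the frame budget is spent, stop and remember where we were.
function workLoop(nextUnitOfWork, shouldYield) {
  while (nextUnitOfWork !== null && !shouldYield()) {
    nextUnitOfWork = performUnitOfWork(nextUnitOfWork);
  }
  return nextUnitOfWork; // null means the whole tree was processed
}

// Depth-first order: child first, then sibling, then climb back up.
function performUnitOfWork(fiber) {
  if (fiber.child) return fiber.child;
  let node = fiber;
  while (node) {
    if (node.sibling) return node.sibling;
    node = node.return;
  }
  return null;
}
```

Because the loop returns the next unit of work instead of recursing, the whole traversal can be paused between any two fibers and resumed later, which is exactly what a recursive stack reconciler cannot do.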
Scheduler
The Scheduler is independent of React itself and ships as a separate package. Its significance: when CPU work is heavy, we compute the length of one frame from the device's fps and only run work within that budget. When a task runs longer than one frame, its execution is suspended to give the browser time to reflow and repaint, and the task resumes at an appropriate time.
In JS, a generator can also pause and resume tasks, but we additionally need to prioritize tasks, which generators alone cannot do. The Scheduler implements time slicing with MessageChannel and orders tasks by priority with a min-heap ("small top heap"), achieving asynchronous, interruptible updates.
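The priority-ordering half of this can be sketched with a plain min-heap keyed by `expirationTime`, similar in spirit to the Scheduler's internal task queue. The field name and task shape here are assumptions for illustration:

```javascript
// Min-heap ("small top heap") of tasks: the task with the smallest
// expirationTime (i.e. highest priority) is always at heap[0].
function push(heap, task) {
  heap.push(task);
  let i = heap.length - 1;
  while (i > 0) {
    const parent = (i - 1) >> 1; // sift up past larger parents
    if (heap[parent].expirationTime <= heap[i].expirationTime) break;
    [heap[parent], heap[i]] = [heap[i], heap[parent]];
    i = parent;
  }
}

function peek(heap) {
  return heap.length === 0 ? null : heap[0];
}

function pop(heap) {
  if (heap.length === 0) return null;
  const top = heap[0];
  const last = heap.pop();
  if (heap.length > 0) {
    heap[0] = last;
    let i = 0;
    for (;;) {
      // Sift down: swap with the smaller child until the heap property holds.
      const l = 2 * i + 1;
      const r = 2 * i + 2;
      let smallest = i;
      if (l < heap.length && heap[l].expirationTime < heap[smallest].expirationTime) smallest = l;
      if (r < heap.length && heap[r].expirationTime < heap[smallest].expirationTime) smallest = r;
      if (smallest === i) break;
      [heap[smallest], heap[i]] = [heap[i], heap[smallest]];
      i = smallest;
    }
  }
  return top;
}
```

In the real Scheduler, a MessageChannel `port.postMessage` callback repeatedly pops the most urgent task from a heap like this, runs it until the frame budget is spent, and then posts another message to continue in the next slice.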
The Scheduler uses an expiration time to represent priority:
- The higher the priority, the shorter the timeout: the expiration time is close to the current time, so the task runs soon.
- The lower the priority, the longer the timeout: the expiration time is far from the current time, so the task may wait a long time before it executes.
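As a rough sketch, the mapping from priority level to expiration time looks like this. The timeout constants follow the spirit of the Scheduler's (an immediate task is already expired; an idle task effectively never expires), but treat the exact values as illustrative:

```javascript
// Illustrative per-priority timeouts, in milliseconds.
const IMMEDIATE_PRIORITY_TIMEOUT = -1;        // already expired: run at once
const USER_BLOCKING_PRIORITY_TIMEOUT = 250;   // e.g. responding to user input
const NORMAL_PRIORITY_TIMEOUT = 5000;         // ordinary updates
const IDLE_PRIORITY_TIMEOUT = 1073741823;     // effectively never expires

// Smaller expiration time = more urgent; the min-heap sorts by this value.
function computeExpirationTime(currentTime, timeout) {
  return currentTime + timeout;
}
```

An expired task (expiration time at or before the current time) must run to completion even if the frame budget is spent, which is how low-priority work is protected from starving forever.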
Lane
Lane represents task priority with binary bits, which makes priority calculations cheap. Different priorities occupy "tracks" at different bit positions, and there is a concept of batches: the lower the priority, the more "tracks" it occupies. How a high priority interrupts a low one, and which priority a new task should be given, are the questions the lane model is designed to answer.
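A sketch of how binary lanes make these calculations cheap: merging priorities is a bitwise OR, and isolating the most urgent pending work is a single `x & -x`. The constants below mirror the shape of React's lane model (a lower bit means a higher priority, and low-priority levels span several bits), but the exact values are illustrative:

```javascript
const NoLanes = 0b0000000;
const SyncLane = 0b0000001;            // highest priority: one track
const InputContinuousLane = 0b0000010; // e.g. drag, scroll
const DefaultLanes = 0b0111000;        // lower priority: a batch of tracks

// Combining pending priorities is a single OR.
function mergeLanes(a, b) {
  return a | b;
}

// x & -x isolates the lowest set bit, i.e. the highest-priority pending lane.
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

// Membership test: does this set of lanes include the given lane?
function includesLane(set, lane) {
  return (set & lane) !== 0;
}
```

Because each check is one machine instruction on a 31-bit integer, React can ask "is there sync work pending?" or "which update should interrupt the current render?" without walking any list.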
batchedUpdates
Simply put, when multiple updates are triggered in the same context, they are merged into one update. For example:
```js
onClick() {
  this.setState({ count: this.state.count + 1 });
  this.setState({ count: this.state.count + 1 });
}
```
In earlier React versions, updates that escape the current context are not merged, for example when multiple setStates are placed inside a setTimeout. Multiple setStates in the same context are merged because executionContext then contains BatchedContext; when executionContext equals NoContext, the tasks in SyncCallbackQueue are flushed synchronously. The setStates inside a setTimeout therefore run with NoContext, are not merged, and each executes synchronously:
```js
onClick() {
  setTimeout(() => {
    this.setState({ count: this.state.count + 1 });
    this.setState({ count: this.state.count + 1 });
  });
}
```
```js
export function batchedUpdates<A, R>(fn: A => R, a: A): R {
  const prevExecutionContext = executionContext;
  executionContext |= BatchedContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    if (executionContext === NoContext) {
      resetRenderTimer();
      // When executionContext is NoContext, the tasks in SyncCallbackQueue
      // are executed synchronously
      flushSyncCallbackQueue();
    }
  }
}
```
In concurrent mode, the setTimeout example above is also merged into one update. The root cause is in the simplified source below: when several setState callbacks are scheduled, their priorities are compared, and if they are equal the function returns early instead of scheduling another render phase.
```js
function ensureRootIsScheduled(root: FiberRoot, currentTime: number) {
  const existingCallbackNode = root.callbackNode; // callback scheduled by an earlier setState
  // ...
  if (existingCallbackNode !== null) {
    const existingCallbackPriority = root.callbackPriority;
    // If the priority of the new setState's callback equals that of the
    // previous one, we enter the batchedUpdate logic and return early
    if (existingCallbackPriority === newCallbackPriority) {
      return;
    }
    cancelCallback(existingCallbackNode);
  }
  // Entry point for scheduling the render phase
  newCallbackNode = scheduleCallback(
    schedulerPriorityLevel,
    performConcurrentWorkOnRoot.bind(null, root),
  );
  // ...
}
```
So why do multiple setStates inside a setTimeout callback get the same priority in concurrent mode? Because in requestUpdateLane, the function that assigns a lane, only the first setState satisfies currentEventWipLanes === NoLanes, so every subsequent setState sees the same currentEventWipLanes. Since schedulerLanePriority is also the same in findUpdateLane (the scheduling priority is identical), the returned lane is the same.
```js
export function requestUpdateLane(fiber: Fiber): Lane {
  // ...
  if (currentEventWipLanes === NoLanes) {
    // Only the first setState satisfies currentEventWipLanes === NoLanes
    currentEventWipLanes = workInProgressRootIncludedLanes;
  }
  // ...
  // Inside setTimeout, schedulerLanePriority and currentEventWipLanes are the
  // same for every setState, so the returned lane is also the same
  lane = findUpdateLane(schedulerLanePriority, currentEventWipLanes);
  // ...
  return lane;
}
```
Suspense
Suspense can show a pending state while data is being requested and show the data once the request succeeds. The reason it works: components inside Suspense have a very low priority, while the offscreen fallback component has a high priority. When a component inside Suspense resolves, the render phase is rescheduled; this happens in the updateSuspenseComponent function. See the Suspense debugging video for details.
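The data-fetching side of this can be sketched without React. A "resource" throws its pending promise from `read()`; a boundary that catches a thrown promise can render its fallback and retry after the promise resolves. This is a simplified illustration of the contract (`createResource` and `read` are our names, not a React API):

```javascript
// A resource wraps a promise and exposes a synchronous read() that either
// returns the value, throws the error, or throws the pending promise itself.
function createResource(promise) {
  let status = 'pending';
  let result;
  promise.then(
    (value) => { status = 'resolved'; result = value; },
    (error) => { status = 'rejected'; result = error; }
  );
  return {
    read() {
      if (status === 'pending') throw promise; // caught by the boundary
      if (status === 'rejected') throw result;
      return result; // request succeeded: hand back the data
    },
  };
}
```

A Suspense boundary is then just a catch site: if the thrown value is a promise, show the fallback and re-render when the promise settles; anything else propagates as a real error.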
Summary
Fiber provides the data-level support for the concurrent architecture.
The Scheduler guarantees time-slice scheduling for concurrency.
The Lane model provides the update strategy for concurrency.
On top of these, batchedUpdates and Suspense are implemented.