Some time ago, a friend asked me about a problem his company had run into. For some reason the back end had not implemented pagination, so it returned 20,000 records in a single response and left the front end to display them in a select component. I immediately understood his concern: if those 20,000 records are hard-coded straight into the select, the page is bound to freeze. He then added that search also had to be supported, again on the front end. That caught my interest, and at the time I came up with the following schemes:
- Lazy loading + paging (the front end maintains the lazy loading and paging of the data itself)
- Virtual scrolling (at the time of writing, antd 4.0 for React supports virtual scrolling in its long select lists)
Lazy loading and paging are commonly used to optimize long lists, much like the paging feature of tables. The idea is to load only the data the user can currently see, and load the next page of data when they scroll to the bottom.
Virtual scrolling can also be used to optimize long lists. Its core idea is to render only the items that fall inside the visible area. As the user scrolls, new items are appended dynamically, and the space left by the items scrolled out of view is filled with top padding so the total scroll height stays correct. The implementation idea is quite simple.
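To make the idea concrete, here is a minimal sketch of the virtual-scrolling calculation. It assumes a fixed row height (itemHeight), a fixed viewport height, and that each item has a title field; the component name VirtualList is just for illustration.

```jsx
import React, { useState } from 'react'

// Render only the rows inside the viewport; the space above and below
// is filled with padding so the scrollbar still reflects the full list.
function VirtualList({ data, itemHeight = 40, height = 400 }) {
  const [scrollTop, setScrollTop] = useState(0)
  const visibleCount = Math.ceil(height / itemHeight) + 1
  const startIndex = Math.floor(scrollTop / itemHeight)
  const endIndex = Math.min(startIndex + visibleCount, data.length)
  const visibleData = data.slice(startIndex, endIndex)

  return (
    <div
      style={{ height, overflowY: 'auto' }}
      onScroll={e => setScrollTop(e.currentTarget.scrollTop)}
    >
      <div
        style={{
          paddingTop: startIndex * itemHeight,          // rows scrolled past
          paddingBottom: (data.length - endIndex) * itemHeight // rows not yet reached
        }}
      >
        {visibleData.map((item, i) => (
          <div key={startIndex + i} style={{ height: itemHeight }}>
            {item.title}
          </div>
        ))}
      </div>
    </div>
  )
}
```

This sketch only handles fixed-height rows; variable heights require measuring each row, which is why a library such as the antd virtual select is usually the more practical choice.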
The analysis above is enough to solve my friend's problem, but as a front-end engineer with higher ambitions, I combed through it carefully and abstracted a practical problem based on the first scheme:
How do we render a large data list and support searching it on the front end?
I will explore this problem by simulating how front-end engineers at different levels might implement a solution. I hope it inspires you to think more deeply.
Below, I analyze the problem from the technical perspective of programmers with different levels of experience. Let the show begin.
Before writing any code, we need some basic preparation. I use Node.js to build a data server that provides the underlying data requests. The core code is as follows:
```js
app.use(async (ctx, next) => {
  if (ctx.url === '/api/getMock') {
    let list = []

    // Generate a random string of the specified length
    // (the original character pool also mixed in Chinese characters, omitted here)
    function generateRandomWords(n) {
      let words = 'abcdefghijklmnopqrstuvwxyz',
          len = words.length,
          ret = ''
      for (let i = 0; i < n; i++) {
        ret += words[Math.floor(Math.random() * len)]
      }
      return ret
    }

    // Generate a list of 100,000 records
    for (let i = 0; i < 100000; i++) {
      list.push({
        name: `xu_0${i}`,
        title: generateRandomWords(12),
        text: `I'm item number ${i}, come on 🌀~~`,
        tid: `xx_${i}`
      })
    }

    ctx.body = {
      state: 200,
      data: list
    }
  }
  await next()
})
```
The above is a basic mock-data server implemented with Koa, so we can simulate a real back-end environment for our front-end development (of course, the 100,000 records could also be generated directly on the front end). The generateRandomWords method generates a random string of the specified length, a technique widely used when mocking data; interested readers can look into it further. The front-end code that follows is written in React (the same applies to Vue).
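For completeness, here is a minimal sketch of how the middleware above could be wired into a runnable Koa server. The port and the log message are assumptions, and in a real setup you would also need to handle CORS or proxy the request from the front end.

```js
// server.js — minimal Koa setup around the mock-data middleware above
const Koa = require('koa')
const app = new Koa()

app.use(async (ctx, next) => {
  // ... the /api/getMock middleware shown above ...
  await next()
})

app.listen(3000, () => {
  console.log('Mock data server listening on http://localhost:3000')
})
```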
Junior engineer's proposal
The hard-coded scheme is to request the data from the back end and render all of it to the page at once.
The code might look like this:
- Request backend data:
```js
fetch(`${SERVER_URL}/api/getMock`)
  .then(res => res.json())
  .then(res => {
    if (res.state) {
      data = res.data
      setList(data)
    }
  })
```
- Render the page:
```jsx
{
  list.map((item, i) => {
    return (
      <div className={styles.item} key={item.tid}>
        <div className={styles.tit}>
          {item.title} <span className={styles.label}>{item.name}</span>
        </div>
        <div>{item.text}</div>
      </div>
    )
  })
}
```
- Search data
```js
const handleSearch = (v) => {
  let searchData = data.filter((item, i) => {
    return item.title.indexOf(v) > -1
  })
  setList(searchData)
}
```
This approach does meet the basic requirement, but it has an obvious drawback: all the data is rendered to the page at once, and such a large amount of data drags page performance down dramatically, causing the page to freeze.
Intermediate engineer's proposal
A front-end engineer with some experience will have some understanding of page performance, so they will be familiar with debounce and throttle functions and will have used techniques such as lazy loading and paging. Let's look at the intermediate engineer's scheme:
With these optimizations the code becomes genuinely usable. The specific implementation is described below:
- Lazy load + paging scheme
Lazy loading is implemented by listening for window scrolling: when a placeholder (sentinel) element becomes visible, the next batch of data is loaded. The principle is as follows:
We listen to the window's scroll event and call getBoundingClientRect on the sentinel element (poll in the code below) to get its distance from the visible viewport, which lets us implement a lazy-loading scheme ourselves.
During scrolling we also need to make sure that nothing happens when the user scrolls back up, so we add a simple one-way guard. The concrete code is as follows:
```js
function scrollAndLoading() {
  if (window.scrollY > prevY) {
    // only react when the user scrolls down
    prevY = window.scrollY
    if (poll.current.getBoundingClientRect().top <= window.innerHeight) {
      // request the next page of data
    }
  }
}

useEffect(() => {
  // other setup code
  const getData = debounce(scrollAndLoading, 300)
  window.addEventListener('scroll', getData, false)
  return () => {
    window.removeEventListener('scroll', getData, false)
  }
}, [])
```
prevY stores the window's previous scroll position, and it is only updated when the user scrolls down, i.e. when the new scroll position is greater than the previous one.
As for the paging logic, it is also easy to implement in plain JavaScript. We define a few variables:
- curPage: the current page number
- pageSize: the number of items displayed per page
- data: the full dataset passed in
With these in place, the basic paging feature can be completed. The core front-end paging code is as follows:
```js
let data = []
let curPage = 1
let pageSize = 16
let prevY = 0
// other code...

function scrollAndLoading() {
  if (window.scrollY > prevY) {
    // only react when the user scrolls down
    prevY = window.scrollY
    if (poll.current.getBoundingClientRect().top <= window.innerHeight) {
      curPage++
      // searchData is the current working dataset (initially the full data)
      setList(searchData.slice(0, pageSize * curPage))
    }
  }
}
```
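For context, the initial load under this scheme might look like the sketch below: data keeps the full dataset, searchData is the working set used by paging (and later by search), and only the first page is rendered. It reuses SERVER_URL, setList, pageSize, and curPage from the earlier snippets.

```js
// Initial load: fetch everything once, keep the full set in `data`,
// use `searchData` as the working set for paging, and render page 1 only.
let searchData = []

fetch(`${SERVER_URL}/api/getMock`)
  .then(res => res.json())
  .then(res => {
    if (res.state) {
      data = res.data
      searchData = data
      setList(searchData.slice(0, pageSize * curPage))
    }
  })
```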
- Debounce implementation
Since the debounce function is relatively simple, here is a straightforward version:
```js
function debounce(fn, time) {
  return function (...args) {
    let that = this
    clearTimeout(fn.tid)
    fn.tid = setTimeout(() => {
      fn.apply(that, args)
    }, time)
  }
}
```
- Search implementation
The code for the search function is as follows:
```js
const handleSearch = (v) => {
  curPage = 1
  prevY = 0
  searchData = data.filter((item, i) => {
    // match with a regular expression; fuzzy search can be added later
    let reg = new RegExp(v, 'gi')
    return reg.test(item.title)
  })
  setList(searchData.slice(0, pageSize * curPage))
}
```
The search has to work together with paging, so to avoid affecting the source data we store the filtered results in a temporary variable, searchData.
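One detail worth guarding against: `new RegExp(v, 'gi')` will throw if the user types regex metacharacters such as `(` or `*`. A small escaping helper avoids this; the name escapeRegExp below is just for illustration, and the escaping pattern is the standard one.

```js
// Escape regex metacharacters in the user's input before building the RegExp
function escapeRegExp(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

const handleSearch = (v) => {
  curPage = 1
  prevY = 0
  // note: no 'g' flag — a reused global regex keeps lastIndex between test() calls
  const reg = new RegExp(escapeRegExp(v), 'i')
  searchData = data.filter(item => reg.test(item.title))
  setList(searchData.slice(0, pageSize * curPage))
}
```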
[Demo screenshots: the initially rendered list, and the list after a search]
Lazy loading is used both before and after searching, so we don't need to worry about performance bottlenecks caused by the large amount of data.
Senior engineer's proposal
As a programmer who has long been in the trenches, a senior engineer should consider more elegant implementations: componentization, algorithm optimization, multithreading, and so on. For the big-data rendering in our problem, we can also use a virtual (long) list to meet the requirement more elegantly and concisely. I already covered virtual scrolling at the beginning, so I won't go into detail here. But what if the amount of data is even larger, say one million records (although we will rarely meet such a brute-force scenario in real development)?
First, we can use time slicing on the JavaScript side to process the one million records in chunks. The idea looks like this:
```js
function multistep(steps, args, callback) {
  var tasks = steps.concat()
  setTimeout(function runTask() {
    var task = tasks.shift()
    task.apply(null, args || []) // apply expects the arguments as an array
    if (tasks.length > 0) {
      // schedule the next chunk so the main thread stays responsive
      setTimeout(runTask, 25)
    } else {
      callback()
    }
  }, 25)
}
```
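A hypothetical usage: split the one million records into chunks and turn each chunk into a task, so every setTimeout tick only processes a small slice. Here bigList, processChunk, and chunkSize are illustrative names, not part of the original code.

```js
// Build one task per 10,000-record chunk so no single tick blocks the main thread
const chunkSize = 10000
const steps = []
for (let i = 0; i < bigList.length; i += chunkSize) {
  const chunk = bigList.slice(i, i + chunkSize)
  steps.push(() => processChunk(chunk)) // e.g. format or index the slice
}

multistep(steps, [], () => {
  console.log('all chunks processed')
})
```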
In this way we can mitigate the blocking of the JavaScript main thread caused by heavy computation. For more performance-optimization techniques, see my earlier articles.
We can also use a Web Worker to move heavy front-end computation off the main thread so that it stays responsive: the worker does the work in the background and notifies the main thread through its messaging mechanism once it finishes, for example for fuzzy search. We could further optimize the search algorithm itself, for instance with binary search over sorted data. These are the kinds of trade-offs a senior engineer should weigh, always distinguishing between scenarios and choosing the most cost-effective solution.
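As a sketch of the Web Worker idea (the file name search.worker.js is an assumption), the main thread hands the dataset and keyword to the worker and only updates the UI when the filtered result comes back:

```js
// search.worker.js — runs off the main thread
self.onmessage = (e) => {
  const { list, keyword } = e.data
  const result = list.filter(item => item.title.includes(keyword))
  self.postMessage(result)
}
```

```js
// Main thread: send the data and keyword to the worker,
// and only touch the UI when the filtered result comes back.
const worker = new Worker('search.worker.js')

const handleSearch = (keyword) => {
  worker.postMessage({ list: data, keyword })
}

worker.onmessage = (e) => {
  searchData = e.data
  curPage = 1
  prevY = 0
  setList(searchData.slice(0, pageSize * curPage))
}
```

Note that postMessage copies the data via structured cloning, which has its own cost, so this pays off mainly when the computation itself is expensive.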
Finally
If you want to learn more front-end skills, practices, and learning paths, you are welcome to join the column "Interesting Front End" to learn and discuss, and explore the boundaries of the front end together.