The Way of Meituan's Commodity Platform

Posted by gabo on Thu, 06 Feb 2020 05:19:11 +0100

From early 2015 to the end of 2018, I spent four years building commodity systems at Meituan-Dianping. Here is a short summary.

2019/1/3

 

Business runs fast, the platform jogs

1. Business runs fast

In early 2015, the pan-commodity system got its start exploring KTV booking, behind which we wrote a set of general-purpose commodity systems. I remember being very pleased when the first version of the commodity system went live, but then my boss came to run a retrospective with me: "Why was this project delayed so long?" Well, it had shipped two weeks late!

It was the first time I had been so pleased, so intoxicated with a launch, only to get feedback like that. The cold water left a deep impression. The background was that Dianping was upgrading KTV group buying to KTV booking as an MVP, a textbook case of the business-runs-fast methodology: delivering two weeks earlier means getting online test data two weeks earlier.

 

What mindset did I build it with? I treated it like my child and gave it everything I thought was best: simple, composable APIs; full-stack batch processing; three sets of independent commodity domain objects; separation of production and online tables and services; dropping DDD; the most concise code possible; and so on. From a technical point of view I built more system capability than the KTV booking business needed, purely out of a desire to write it well. On reflection, supporting a business means mastering the trade-off: early in a business's life, do everything possible to let it run fast. Both product and system should work iteratively.

2. The platform jogs

Should everything really run fast? For the first two years I built commodity systems inside a business team; for the next two, inside the platform team. That gave me a new insight: a platform system should jog, and cold-start as much as possible. Simply chasing the business is wrong.

 

Platform First

Whether to invest in platform infrastructure or in new business opportunities comes down to which has the better ROI. That judgment depends heavily on the industry and market, company size, strategy, organization, and the key people (KP) involved.

 

The window of rapid industry growth matters enormously: during it, the platform infrastructure layer should do less and the business layer above it should do more. Once the logic of chasing new business no longer holds, the platform should receive more resources, to build thickness and compounding returns. For a startup, what matters most for survival is not only running fast but running hard.

Take the story of Alibaba's Shared Business Unit. At the time, Taobao Mall had failed as a business, but the technology leader judged that the Mall's commodity system was the more advanced one and should become the foundation for Alibaba's businesses. He went to talk to Lao Lu, the Mall's CEO, who bought the argument and adopted the Mall's commodity system. One business had failed and the other was very mature, yet the decision logic for the technical system was not tied absolutely to the business.

 

In most companies, business volume determines an organization's voice, whether in product, technology, sales, or the tools behind them. But the decision logic for a platform system should not be which team has more people, which business has more volume, or which technical boss holds the higher rank. It should start from the essence of the technology: whether its core system capability and its advancement will better support the company's future. Alibaba had two BUs and hundreds of people; to develop the business along multiple lines, the system chose the platform route and broke through technology's organizational walls.

Today, Taobao's CTO is responsible for the technology of big retail, the cloud, the middle platform, Ant, and more. All technology should report to one leader, with independent decision-making power on a par with product. Otherwise "high-tech enterprise" is just words on paper. Technology, in essence, drives business.

 

"Platform first"! The platform cannot simply be led by the business. The basic judgment is that a platform system should follow a planned economy: decide which capabilities are core and which carry the greatest value.

 

Platform Slow Enough

"Platform first" is a decision about direction; "platform slow enough" is about development rhythm. Platform R&D should also think iteratively; business support can never cover every requirement. "Slow" mainly means two things:

 

1. Platform capability should follow the planned economy as much as possible. It is impossible to support 100% of the business in one go; at the beginning it may not even reach 20%, and it is impossible to support "business running fast" all the time. In Alibaba's middle platform, the customer-satisfaction KPI accounts for 15% of the total, and in the early stage you will be scolded by front-line development and product teams. Without courage and integrity there is no good platform. The platform's first task is to practice its fundamentals and strengthen design and code. At the start it is best to cold-start the platform and lower customers' expectations as much as possible; the more a function leans toward the foreground, the lower its priority can be.

 

2. Platform construction should not chase speed alone; it also demands quality. Abstraction level, extensibility, stability, and the other core capabilities must develop in step. Extensibility determines the platform's future growth ceiling; system abstraction determines its current capability ceiling. Stability is the most critical factor in system quality, and platform instability has an outsized impact.

 

Think critically, understand the business

The deepest lesson from these years of business work: the depth of your business understanding determines the power of your platform capability. Take domain decoupling as an example. Dianping's first group-buying commodity system coupled the commodity with settlement information (commission rate, settlement method), activity information (start time, end time), and front-end display control; commodity production and release were coupled with the audit workflow.

 

While building the commodity platform, the question we asked most often was: is the commodity really related to these things? Does it have to be? Is the relation strong or weak?

 

In the end we removed the coupling. Settlement information was taken out of the commodity domain, and settlement method now lives in the customer dimension. Commission calculation gained more dimensions: it supports category and scheme dimensions, plus customer whitelists and commodity-level commissions. These changes were technology-led; technology drove the product, starting from nothing.
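To make the decoupling concrete, here is a minimal sketch (all names are mine, not Meituan's actual code) of commission resolution living outside the commodity domain: the commodity carries no settlement fields, and a resolver walks the supported dimensions in a priority order (the order shown is my assumption).

```java
import java.math.BigDecimal;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: settlement lives outside the commodity domain.
// The commodity object carries no commission fields; this resolver walks
// the supported dimensions in priority order and returns the first match.
public class CommissionResolver {

    public enum Dimension { COMMODITY, CUSTOMER_WHITELIST, SCHEME, CATEGORY, CUSTOMER }

    // dimension -> (key -> rate); in a real system this is backed by storage.
    private final Map<Dimension, Map<String, BigDecimal>> rates = new LinkedHashMap<>();

    public void put(Dimension dim, String key, BigDecimal rate) {
        rates.computeIfAbsent(dim, d -> new LinkedHashMap<>()).put(key, rate);
    }

    /** More specific dimensions win; the priority order is an assumption. */
    public Optional<BigDecimal> resolve(String commodityId, String customerId,
                                        String schemeId, String categoryId) {
        return lookup(Dimension.COMMODITY, commodityId)
            .or(() -> lookup(Dimension.CUSTOMER_WHITELIST, customerId))
            .or(() -> lookup(Dimension.SCHEME, schemeId))
            .or(() -> lookup(Dimension.CATEGORY, categoryId))
            .or(() -> lookup(Dimension.CUSTOMER, customerId));
    }

    private Optional<BigDecimal> lookup(Dimension dim, String key) {
        return Optional.ofNullable(rates.getOrDefault(dim, Map.of()).get(key));
    }
}
```

The payoff of the decoupling shows up here: adding another commission dimension touches only the resolver, never the commodity model.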

 

Here is another example, about decoupling from sales. A group deal was originally called a "scheme" (the scheme being the signed agreement for opening cooperation with a merchant), and listing group deals was called the "supply chain", mainly serving sales-driven promotion scenarios. From 2013 to 2015, Meituan and Dianping fought over group buying. Both sides relied on sales to open new cities: signing exclusives, stocking shelves, entering orders one by one, cutting commissions for key accounts, even paying guaranteed minimums. Many group-buying functions were built for the sales CRM, which demanded efficiency and speed. The product positioning of group buying was the discount package.

 

After 2016, as the group-buying market changed, the cost of sales entering group deals one order at a time became very high. The pan-commodity system started in 2015 was aimed at the merchant side from the beginning; because merchants' operational needs are simple, the merchant side is characteristically lightweight. Later the sales side was added, reusing most of the back-end interfaces, functions, and pages. Sales logic stays consistent with merchant logic, because we understood sales as more of an auxiliary role. We no longer call a commodity a "scheme", nor commodity management a "commodity supply chain". The essence of an O2O commodity is: structured CMS + flexible sales form. The CMS part extends the ability to describe and store metadata; the flexible sales form strengthens price, inventory, and sales rules.
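A minimal sketch of "structured CMS + flexible sales form", with hypothetical names: the CMS half is per-product-type attribute metadata plus free-form values, and the sales half keeps price, inventory, and sale rules apart from the content.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "structured CMS + flexible sales form".
// CMS half: per-product-type attribute metadata plus free-form values.
// Sales half: price, inventory, and sale rules, kept apart from content.
class AttributeDef {
    final String name;
    final boolean required;
    AttributeDef(String name, boolean required) { this.name = name; this.required = required; }
}

class ProductType {
    final String code;
    final List<AttributeDef> attributes; // metadata: extending a type is data, not code
    ProductType(String code, List<AttributeDef> attributes) { this.code = code; this.attributes = attributes; }
}

class SalesForm {
    BigDecimal price;
    int stock;
    final Map<String, String> saleRules = new HashMap<>(); // e.g. "maxPerUser" -> "2"
}

class Commodity {
    final ProductType type;
    final Map<String, String> attributeValues = new HashMap<>(); // the structured-CMS part
    final SalesForm salesForm = new SalesForm();                 // the flexible-sales part

    Commodity(ProductType type) { this.type = type; }

    /** Validate the content against the type's metadata before release. */
    void validate() {
        for (AttributeDef def : type.attributes) {
            if (def.required && !attributeValues.containsKey(def.name)) {
                throw new IllegalStateException("missing attribute: " + def.name);
            }
        }
    }
}
```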

 

The B/C structure of commodities. Both Dianping and Meituan favored the B/C architecture: the B side is called the commodity supply chain, and it diverges sharply from the C side in both data and logic. Splitting the commodity foundation layer by B/C scenario is like splitting a whole person into an upper half and a lower half. That approach is mostly solidified thinking: it was fast to do back then, so it stayed that way; then as now, "whatever exists is reasonable", and it lacks constructive platform thinking. The boundary of the commodity base layer should enclose the closed loop of commodity production, release, and sale. The data can exist in one, two, or even three copies, but they all close the loop inside the commodity base system. The commodity base system should be B/C-agnostic; it should gather commodity capability in one place, and from that base point the cost of circulating and managing commodity data is minimal.
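A minimal sketch of what a B/C-agnostic base could look like (hypothetical names): one system owns the production-release-sale closed loop, and the B side and C side become read projections over it rather than two separately built foundation layers.

```java
// Hypothetical sketch of a B/C-agnostic commodity base: one system owns the
// production-release-sale closed loop, and the B side and C side become read
// projections over it instead of two separately built foundation layers.
class CommodityBase {
    String id;
    String title;
    String draftContent;     // production-side state
    String publishedContent; // online state: possibly a second copy of the data
    boolean onSale;
}

record MerchantView(String id, String title, String draftContent) {}   // B-side projection
record BuyerView(String id, String title, String publishedContent) {}  // C-side projection

class Projections {
    static MerchantView toB(CommodityBase c) {
        return new MerchantView(c.id, c.title, c.draftContent);
    }
    static BuyerView toC(CommodityBase c) {
        if (!c.onSale) throw new IllegalStateException("not visible on the C side");
        return new BuyerView(c.id, c.title, c.publishedContent);
    }
}
```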

 

The most important lesson here: think critically about the business, think more, and keep asking yourself why. When the market changes, cognition has to change with it; we cannot follow the old system's thinking. Think without borders.

 

A platform that perceives as little as possible

For a platform, the first principle in choosing a technical solution is the Least Knowledge Principle (LKP). In plain terms: the platform side should perceive the access side as little as possible. Here are some common scenarios.

The scenario of function development. If a function can be standardized, abstract it into several general packages and provide a default package; in the default case neither the platform side nor the business side writes any code, or even any configuration. The highest state of platform development is "the sounds of roosters and dogs carry between villages, yet people grow old and die without visiting each other". Reaching it requires a deep understanding of the business and the ability to abstract general packages out of it; a sketch of the default-package idea follows the passage below.

 

"Everyone lives in a world of abundance, peace, tranquility, joy and contentment. Whether or not they communicate has no impact on their lives. Everyone lives in the moment and enjoys it: listening to the roosters and dogs outside the window, watching the white clouds drift overhead and the wind blow past, wary lest some unexpected visitor break the beautiful moment."
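Returning to the default package: a minimal sketch, with hypothetical names, of a platform that ships a default implementation and lets a business line override it only when its needs diverge, so most access parties write no code and no configuration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the platform abstracts a capability into a "package"
// interface, ships a default, and lets a business line override it only when
// its needs diverge. Most access parties register nothing at all.
interface PricingPackage {
    long finalPriceCents(long listPriceCents);
}

class PackageRegistry {
    private static final PricingPackage DEFAULT = listPrice -> listPrice; // the default package
    private static final Map<String, PricingPackage> overrides = new HashMap<>();

    static void register(String bizLine, PricingPackage pkg) { overrides.put(bizLine, pkg); }

    static PricingPackage forBizLine(String bizLine) {
        return overrides.getOrDefault(bizLine, DEFAULT);
    }
}

class Demo {
    public static void main(String[] args) {
        // KTV booking registers nothing; the default just works: no code, no config.
        System.out.println(PackageRegistry.forBizLine("ktv").finalPriceCents(9900));      // 9900

        // One business line with a real need overrides the package.
        PackageRegistry.register("groupbuy", listPrice -> listPrice * 9 / 10);
        System.out.println(PackageRegistry.forBizLine("groupbuy").finalPriceCents(9900)); // 8910
    }
}
```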

 

The scenario of data interaction. As far as possible, the platform side should set the standard and establish a data layer for isolation, with the business side driving and in control; that way the platform depends on no access party, and the dependencies stay clean. If the platform must perceive the business, try not to use SPI over RPC: it makes the access party's dependencies very complex. The extension-point (jar) approach is ugly, but still better than SPI over RPC. Another option is plug-in plus framework deployed as a single instance, but that poses a real challenge to resource cost and the middleware stack. Of course, depending on the platform side is a different matter; that dependency is stable.
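A minimal sketch of the data-layer isolation preferred above (a hypothetical contract): the platform publishes a standard record shape and a write path; access parties push conforming data, and the platform never calls back into business code, so it compiles and runs with zero dependency on any access party.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of "the platform sets the standard, the business drives".
// The platform owns only this contract and store; it never calls back into
// business code (no SPI), so it depends on no access party at all.
record ExtensionRecord(String commodityId, String namespace, Map<String, String> fields) {}

class ExtensionStore {
    private final Map<String, ExtensionRecord> store = new ConcurrentHashMap<>();

    /** Business sides push records conforming to the standard; the platform checks shape only. */
    public void upsert(ExtensionRecord rec) {
        if (rec.commodityId() == null || rec.namespace() == null) {
            throw new IllegalArgumentException("record violates the standard");
        }
        store.put(rec.commodityId() + ":" + rec.namespace(), rec);
    }

    /** Reads are equally generic: the platform never interprets business semantics. */
    public ExtensionRecord read(String commodityId, String namespace) {
        return store.get(commodityId + ":" + namespace);
    }
}
```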

 

The scenario of logical judgment. A commodity system has many conditional branches. Originally we depended on very fine-grained categories; now, wherever the coarser-grained product type suffices, we depend on the product type instead (product type being, say, group purchase; category being, say, beauty or nail care). If a blacklist will do, don't build a whitelist: in most scenarios a blacklist shields the special logic, while only a few scenarios need a whitelist to gate entry and grant privileges. In a word: if big logic will do, don't write small logic; if rough logic will do, don't write fine logic.
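A minimal sketch of both rules at once, with hypothetical names: branch on the coarse product type, and shield the exceptions with a blacklist rather than enumerating a whitelist.

```java
import java.util.Set;

// Hypothetical sketch of both rules: branch on the coarse product type, and
// shield the exceptions with a blacklist instead of enumerating a whitelist
// of fine-grained categories. The default path stays wide open.
class InvoiceSupport {
    private static final Set<String> BLACKLISTED_TYPES = Set.of("LOTTERY", "VIRTUAL_CARD");

    /** Big, rough logic: every product type supports invoicing except the few blacklisted. */
    static boolean supportsInvoice(String productType) {
        return !BLACKLISTED_TYPES.contains(productType);
    }
}
```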

 

We should pursue LKP as far as possible. Bear in mind that awareness is expensive; even configuration is a very costly form of it.

 

To be the universal glue, you need a global view

 

What is a global view? I build commodities, but I need to know the upstream and downstream dependencies of commodities. Beyond connecting data and functions with CRM, audit, marketing, search, advertising, and the rest, I should jump up at least one step. The ideal state: when a new business launches, marketing standards are available by default; search works out of the box, so users can find the product by keyword; "guess you like" recommends more products matching user preferences…

 

Jump up one step: the platform side should at least stand in the same channel as its partners and understand the business value. When we first built platforms, we thought it was enough to interface with the other platforms, with standardized contracts between us. Our own tasks were few and standardized, yet we still had to wait on the other platforms' scheduling and development: perhaps only half a day of actual work, but a process dragging on for a month or two. If the platform side begins with the end in mind and shares common values and a global view, a globally optimal solution comes easily. When I'm fast and you're slow, the two of us move at the minimum, not the average.

 

Jump up two steps and you can understand the business value from the user's perspective. For example, when I build commodities and integrate with marketing, I have to think about which industries and product forms merchants will appear in and what demands they will have. Once you have thought through how to connect with the marketing platform, choosing the more appropriate plan becomes much easier.

 

What is universal glue? The phrase brings Python and scripting languages to mind. The essence of glue is to connect and to re-create. Frankly, connecting one platform to another always requires an active party. For example, an online commodity-trading system is best centered on commodities, driving the pre-sales scenarios and letting data flow through every online marketing scenario. The commodity center then has to take on the role of universal glue: it must build a large number of adaptation interfaces for marketing, and another large number for search. In a large commodity-trading system, the commodity system and the trading system are the two axes that drive the core and peripheral systems.

Universal glue carries heavy development and maintenance costs; it is dirty, tiring work. But once it is done well, a whole expanse of platform capability gains a center to drive it, and efficiency can improve severalfold or tenfold. To play the universal-glue role with a view of the whole, we must pick one or two centers and have them take on the responsibility of connecting the other centers.
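A minimal sketch of the glue role (hypothetical adapters): the commodity center owns the adaptation layer and translates its own model into the shapes marketing and search each expect.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the glue role: the commodity center owns the
// adaptation layer, translating its own model into the shapes marketing and
// search each expect, so neither peripheral system needs to know commodities.
class CommodityDto {
    String id;
    String title;
    long priceCents;
    List<String> tags;
}

interface MarketingAdapter { Map<String, Object> toPromotionItem(CommodityDto c); }
interface SearchAdapter    { Map<String, Object> toSearchDoc(CommodityDto c); }

class CommodityGlue {
    private final MarketingAdapter marketing =
            c -> Map.of("itemId", c.id, "basePrice", c.priceCents);       // marketing's shape
    private final SearchAdapter search =
            c -> Map.of("docId", c.id, "title", c.title, "tags", c.tags); // search's shape

    void publish(CommodityDto c) {
        Map<String, Object> promo = marketing.toPromotionItem(c);
        Map<String, Object> doc = search.toSearchDoc(c);
        // ...deliver promo to the marketing platform and doc to the search index...
    }
}
```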

 

Methodology, tooling, and vitality

Only what you believe in, think through, and actually use becomes experience. In four years, the only piece of software-architecture methodology I have held on to is the SOLID principles; I am against DDD;

Put more simply: "high cohesion, low coupling". Simpler still: "boundaries".

I put methodology first here because I hope you will carry less methodology and do more practice. Theory and practice do not conflict. Technology is full of hot concepts; try hard to grasp their essence, and if you can't, no matter: practice step by step. How to practice? The methodologies worth keeping are few: SOLID and GRASP, or just "simplicity" and "boundaries", are enough for a lifetime. The key is to use them.

 

Tooling. The industry is full of tooling: metadata, configuration, componentization and composition, modularization, service-ization, rule engines, process engines, UI builders, front-end templates, and so on. I won't go into specifics; what is the essence of a tool? To raise efficiency and productivity. If code can write it, don't let a person write it; wherever a tool can replace a person, the person will be replaced. This too is part of the UNIX philosophy.
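One small illustration of "let code write it", under assumed names: a single metadata-driven validator replaces a hand-written validator per product type, so adding a type becomes data entry rather than development.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "if code can write it, don't let a person write it":
// one generic validator driven by metadata replaces a hand-written validator
// per product type; adding a type becomes data entry, not development.
record FieldRule(String field, boolean required, int maxLen) {}

class MetaValidator {
    static void validate(Map<String, String> input, List<FieldRule> rules) {
        for (FieldRule r : rules) {
            String v = input.get(r.field());
            if (r.required() && (v == null || v.isEmpty()))
                throw new IllegalArgumentException(r.field() + " is required");
            if (v != null && v.length() > r.maxLen())
                throw new IllegalArgumentException(r.field() + " exceeds " + r.maxLen() + " chars");
        }
    }
}
```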

 

Vitality. The last point is the vitality of a platform system. People, animals, and plants have life; machines and code have none. In "Thinking about software design under uncertainty", Xuannan, vice president of Alibaba's business middle platform, noted that Alibaba's platform systems have finished the tooling stage and now need to move toward kernel thinking. What is the essence of that?

 

E-commerce has many platform centers: user center, commodity center, marketing center, trade center, finance center, and so on. Yet even with all these centers, the front-end access side still suffers. The functions look complete but are very complex to use, like a novice going to Huaqiangbei to assemble a PC: what is a CPU, what is a motherboard? The complexity may sit even higher on the platform side than on the access side. How can so many centers work together? Hence kernel thinking.

 

Let me interpret kernel thinking. When I first encountered operating systems, I thought the POSIX standard interface was the most important part of a kernel; its essence is the standard communication interface with user space. In recent years I have come to see that this is only the skin you can see: a standard communication interface does not capture the essence of operating-system capability.

 

UNIX has many philosophies, and they can be summed up as: the whole is greater than the sum of its parts (Aristotle). The UNIX research lineage favored the microkernel, while Linux is a monolithic kernel; yet the very reason Linux chose the monolithic design is "the whole is greater than the sum of its parts", so in a sense the two share the same root. If the platforms are isolated from each other as plug-ins, they lose the ability to act under unified command; in other words, they lose their vitality.

 

Why did the late-rising Linux unify the operating-system market, rather than the systems designed from the start to be modular, pluggable, "microservice-like"? It is worth pondering. Microservices today are hot and the mood impetuous; we need to figure out what a system is really meant to provide.

 

More powerful than standardization is for these platforms to form one living whole. For example, is there a shared environment in which a newly created product is immediately perceived by every platform? In which the platforms can initiate decisions on their own, communicate with each other, and create new products together? Today even the understanding of concepts such as scheme, commodity, transaction, and settlement, down to variable names and values, differs from platform to platform.
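What such a shared environment might minimally look like, as a very rough sketch (entirely my assumption, not a described design): a product is announced once, and every platform that cares perceives it immediately and can react on its own initiative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Entirely hypothetical sketch of such a shared environment: a product is
// announced once, and every platform that cares perceives it immediately
// and can react on its own initiative.
record ProductCreated(String productId, String productType) {}

class SharedEnvironment {
    private static final List<Consumer<ProductCreated>> listeners = new CopyOnWriteArrayList<>();

    static void perceive(Consumer<ProductCreated> listener) { listeners.add(listener); }

    static void announce(ProductCreated event) {
        // marketing, search, and trade all hear it at the same moment
        listeners.forEach(l -> l.accept(event));
    }
}
```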

 

This calls for tight coupling, higher-level abstraction, and unified command across the platforms, though not necessarily a single commander. Only when the whole is greater than the sum of its parts can a system transcend atomism and reductionism, and the platform come alive and full of vitality. We should still go deep into each subsystem's abstractions and details, but the focus is the strong connections between the systems: whether they are unified, whether they are tightly coupled.

 

Finally, I quote a few passages on the philosophical differences between the two kinds of kernels:

The UNIX Programming Environment

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can't be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.

Program Design in the UNIX Environment

Much of the power of the UNIX operating system comes from a style of program design that makes programs easy to use and, more important, easy to combine with other programs. This style has been called the use of software tools, and depends more on how the programs fit into the programming environment and how they can be used with other programs than on how they are designed internally. [...] This style was based on the use of tools: using programs separately or in combination to get a job done, rather than doing it by hand, by monolithic self-sufficient subsystems, or by special-purpose, one-time programs.

 

Linus Torvalds on microkernels

https://www.oreilly.com/openbook/opensources/book/linus.html
When I began to write the Linux kernel, there was an accepted school of thought about how to write a portable system. The conventional wisdom was that you had to use a microkernel-style architecture.

With a monolithic kernel such as the Linux kernel, memory is divided into user space and kernel space. Kernel space is where the actual kernel code is loaded, and where memory is allocated for kernel-level operations. Kernel operations include scheduling, process management, signaling, device I/O, paging, and swapping: the core operations that other programs rely on to be taken care of. Because the kernel code includes low-level interaction with the hardware, monolithic kernels appear to be specific to a particular architecture.

A microkernel performs a much smaller set of operations, and in more limited form: interprocess communication, limited process management and scheduling, and some low-level I/O. Microkernels appear to be less hardware-specific because many of the system specifics are pushed into user space. A microkernel architecture is basically a way of abstracting the details of process control, memory allocation, and resource allocation so that a port to another chipset would require minimal changes.

So at the time I started work on Linux in 1991, people assumed portability would come from a microkernel approach. You see, this was sort of the research darling at the time for computer scientists. However, I am a pragmatic person, and at the time I felt that microkernels (a) were experimental, (b) were obviously more complex than monolithic Kernels, and (c) executed notably slower than monolithic kernels. Speed matters a lot in a real-world operating system, and so a lot of the research dollars at the time were spent on examining optimization for microkernels to make it so they could run as fast as a normal kernel. The funny thing is if you actually read those papers, you find that, while the researchers were applying their optimizational tricks on a microkernel, in fact those same tricks could just as easily be applied to traditional kernels to accelerate their execution.

In fact, this made me think that the microkernel approach was essentially a dishonest approach aimed at receiving more dollars for research. I don't necessarily think these researchers were knowingly dishonest. Perhaps they were simply stupid. Or deluded. I mean this in a very real sense. The dishonesty comes from the intense pressure in the research community at that time to pursue the microkernel topic. In a computer science research lab, you were studying microkernels or you weren't studying kernels at all. So everyone was pressured into this dishonesty, even the people designing Windows NT. While the NT team knew the final result wouldn't approach a microkernel, they knew they had to pay lip service to the idea.

Fortunately I never felt much pressure to pursue microkernels. The University of Helsinki had been doing operating system research from the late 60s on, and people there didn't see the operating system kernel as much of a research topic anymore. In a way they were right: the basics of operating systems, and by extension the Linux kernel, were well understood by the early 70s; anything after that has been to some degree an exercise in self-gratification.

If you want code to be portable, you shouldn't necessarily create an abstraction layer to achieve portability. Instead you should just program intelligently. Essentially, trying to make microkernels portable is a waste of time. It's like building an exceptionally fast car and putting square tires on it. The idea of abstracting away the one thing that must be blindingly fast--the kernel--is inherently counter-productive.

Of course there's a bit more to microkernel research than that. But a big part of the problem is a difference in goals. The aim of much of the microkernel research was to design for a theoretical ideal, to come up with a design that would be as portable as possible across any conceivable architecture. With Linux I didn't have to aim for such a lofty goal. I was interested in portability between real world systems, not theoretical systems.

 
