netty series: a website speed optimization scheme worth hundreds of millions

Posted by lanjoky on Thu, 16 Dec 2021 10:33:22 +0100

brief introduction

In fact, the most profitable job in the software industry is not writing code. Those who only write code are called code farmers, and the slightly more senior ones are called programmers, but either way they are hired hands. So is there a bigger career? There is. It is called consultant.

Consultants help enterprises make plans, design architectures and optimize systems. Sometimes a simple code change or architecture adjustment can make software or a process run far more efficiently, saving the enterprise hundreds of millions in expenses.

Today, in addition to showing how to support both the HTTP and HTTPS protocols in netty, we also introduce a website speed optimization scheme worth hundreds of millions. With this scheme, an annual salary in the millions is no longer a dream!

The goal of this article

This article shows how to support both the http1 and http2 protocols in one netty service. The two servers provide access to multiple images, and we will show how the server returns them. Finally, we introduce a speed optimization scheme worth hundreds of millions, which is sure to benefit everyone.

Support multiple image services

On the server side, the service is started through ServerBootstrap. ServerBootstrap has a group method that specifies the parent (acceptor) group and the child (worker) group:

    public ServerBootstrap group(EventLoopGroup group) 
    public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) 

You can pass two different groups or reuse the same one; the two overloads of group differ only in whether the parent and child roles share a single EventLoopGroup.

Here we create one EventLoopGroup in the main server and pass it into both ImageHttp1Server and ImageHttp2Server. Each server then calls group with it and configures its own handlers.
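As a rough sketch of how that might look (the constructors and the start method of ImageHttp1Server and ImageHttp2Server are assumptions here, not the project's exact signatures):

    // One shared NioEventLoopGroup drives both the http1 and the http2 server.
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        // Hypothetical start(...) methods: each server builds its own ServerBootstrap,
        // calls group(group), binds its own port and returns the bound channel.
        Channel httpChannel = new ImageHttp1Server(8080).start(group);
        Channel httpsChannel = new ImageHttp2Server(8443).start(group);
        // Block until both channels are closed.
        httpChannel.closeFuture().sync();
        httpsChannel.closeFuture().sync();
    } finally {
        group.shutdownGracefully();
    }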

Let's take a look at the structure of ImageHttp1Server:

        ServerBootstrap b = new ServerBootstrap();
        b.option(ChannelOption.SO_BACKLOG, 1024);
        b.group(group).channel(NioServerSocketChannel.class).handler(new LoggingHandler(LogLevel.INFO))
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch){
                ch.pipeline().addLast(new HttpRequestDecoder(),
                                      new HttpResponseEncoder(),
                                      new HttpObjectAggregator(MAX_CONTENT_LENGTH),
                                      new Http1RequestHandler());
            }
        });

We add netty's HttpRequestDecoder, HttpResponseEncoder and HttpObjectAggregator, plus a custom Http1RequestHandler.

Take another look at the structure of ImageHttp2Server:

        ServerBootstrap b = new ServerBootstrap();
        b.option(ChannelOption.SO_BACKLOG, 1024);
        b.group(group).channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(sslCtx.newHandler(ch.alloc()), new CustProtocolNegotiationHandler());
            }
        });

For simplicity, requests arriving over plain HTTP are served by the http1 service, while requests arriving over HTTPS are served by the http2 service.

Therefore, in the http2 service we only need a custom protocol negotiation handler; we do not have to handle the cleartext (h2c) upgrade request.
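The sslCtx used above is an ALPN-enabled SslContext. The article does not show how it is built, but a minimal sketch, assuming a self-signed certificate for testing, could look like this:

    SelfSignedCertificate ssc = new SelfSignedCertificate();
    SslContext sslCtx = SslContextBuilder
            .forServer(ssc.certificate(), ssc.privateKey())
            .sslProvider(SslProvider.JDK)
            // Restrict the cipher suites to those allowed for HTTP/2.
            .ciphers(Http2SecurityUtil.CIPHERS, SupportedCipherSuiteFilter.INSTANCE)
            // Advertise h2 first, then fall back to http/1.1 via ALPN.
            .applicationProtocolConfig(new ApplicationProtocolConfig(
                    ApplicationProtocolConfig.Protocol.ALPN,
                    ApplicationProtocolConfig.SelectorFailureBehavior.NO_ADVERTISE,
                    ApplicationProtocolConfig.SelectedListenerFailureBehavior.ACCEPT,
                    ApplicationProtocolNames.HTTP_2,
                    ApplicationProtocolNames.HTTP_1_1))
            .build();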

The http2 handler

In the TLS environment we write a custom CustProtocolNegotiationHandler that extends ApplicationProtocolNegotiationHandler to negotiate the application protocol between client and server.
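A minimal sketch of such a handler is shown below. The HTTP/1.1 fallback branch and the pipeline it installs are assumptions for illustration; the body of configureHttp2 is exactly the conversion code that follows:

    public class CustProtocolNegotiationHandler extends ApplicationProtocolNegotiationHandler {

        protected CustProtocolNegotiationHandler() {
            // If the client negotiates nothing via ALPN, fall back to HTTP/1.1.
            super(ApplicationProtocolNames.HTTP_1_1);
        }

        @Override
        protected void configurePipeline(ChannelHandlerContext ctx, String protocol) {
            if (ApplicationProtocolNames.HTTP_2.equals(protocol)) {
                // HTTP/2 over TLS: install the http2-to-http1 conversion shown below.
                configureHttp2(ctx);
            } else if (ApplicationProtocolNames.HTTP_1_1.equals(protocol)) {
                // Plain HTTP/1.1 over TLS (assumed fallback).
                ctx.pipeline().addLast(new HttpServerCodec(),
                                       new HttpObjectAggregator(MAX_CONTENT_LENGTH),
                                       new Http1RequestHandler());
            } else {
                throw new IllegalStateException("unknown protocol: " + protocol);
            }
        }

        private void configureHttp2(ChannelHandlerContext ctx) {
            // The InboundHttp2ToHttpAdapter conversion shown below goes here.
        }
    }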

For the http2 protocol, netty's InboundHttp2ToHttpAdapterBuilder and HttpToHttp2ConnectionHandlerBuilder are used to convert http2 frames into http1 FullHttpRequest objects. This lets us process the messages directly in http1 format.

The conversion process is as follows:

        DefaultHttp2Connection connection = new DefaultHttp2Connection(true);
        InboundHttp2ToHttpAdapter listener = new InboundHttp2ToHttpAdapterBuilder(connection)
                .propagateSettings(true).validateHttpHeaders(false)
                .maxContentLength(MAX_CONTENT_LENGTH).build();

        ctx.pipeline().addLast(new HttpToHttp2ConnectionHandlerBuilder()
                .frameListener(listener)
                .connection(connection).build());

        ctx.pipeline().addLast(new Http2RequestHandler());

The only difference between the converted http2 handler and an ordinary http1 handler is that an extra streamId attribute has to be set on the request header and the response header.

There is also no need to deal with the 100 Continue and keep-alive handling that is specific to http1. Otherwise it is no different from an http1 handler.
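As a sketch, the sendResponse method used by the handlers below might propagate the stream id like this (the header name comes from netty's HttpConversionUtil; the rest is an assumption about the project's code):

    private void sendResponse(ChannelHandlerContext ctx, String streamId,
                              FullHttpResponse response, FullHttpRequest request) {
        // request is kept in the signature for http1 keep-alive handling (omitted here).
        response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
        if (streamId != null) {
            // x-http2-stream-id tells the HttpToHttp2ConnectionHandler which
            // HTTP/2 stream this converted response belongs to.
            response.headers().set(
                    HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), streamId);
        }
        ctx.writeAndFlush(response);
    }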

Process pages and images

Because the converter turns http2 frames into ordinary http1 objects, serving the requested page or image is not much different from plain http1 handling.

For the page, we get the HTML to return, set CONTENT_TYPE to "text/html; charset=UTF-8", and send it back:

    private void handlePage(ChannelHandlerContext ctx, String streamId, FullHttpRequest request) throws IOException {
        ByteBuf content = ImagePage.getContent();
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, content);
        response.headers().set(CONTENT_TYPE, "text/html; charset=UTF-8");
        sendResponse(ctx, streamId, response, request);
    }

For images, we get the image to return, convert it into a ByteBuf, set CONTENT_TYPE to "image/jpeg", and send it back:

    private void handleImage(String id, ChannelHandlerContext ctx, String streamId,
            FullHttpRequest request) {
        ByteBuf image = ImagePage.getImage(parseInt(id));
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, image);
        response.headers().set(CONTENT_TYPE, "image/jpeg");
        sendResponse(ctx, streamId, response, request);
    }

In this way, we can handle both page requests and image requests on the netty server side.
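The dispatch between the two is not shown above; assuming an "/image/{id}" URI layout (an assumption for illustration), it might look like this inside the handler:

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) throws Exception {
        // For http2 the converter puts the stream id into this header; for http1 it is null.
        String streamId = request.headers().get(
                HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text());
        String uri = request.uri();
        if (uri.startsWith("/image/")) {
            handleImage(uri.substring("/image/".length()), ctx, streamId, request);
        } else {
            handlePage(ctx, streamId, request);
        }
    }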

Speed optimization scheme worth hundreds of millions

Finally, we come to the most wonderful part of this article. What is the speed optimization scheme worth hundreds of millions?

Before we talk about the scheme, here is a story about flood control. Two counties sit next to a big river. The river is very unstable and often floods, but the two county heads handle it in very different ways.

The head of county A is serious and responsible. He regularly sends people to patrol and inspect the river, builds embankments and plants trees, and never relaxes for a moment. During his term of office everything was safe and no flood ever broke through.

The head of county B never inspects anything. When the river overflowed, he organized people to fight the flood and rescue the victims. The media then reported his great achievements in flood fighting, and in the end he was promoted to mayor for his outstanding political record.

Well, the story is over. Back to our optimization. Whether the user requests a page or an image, we need to call ctx.writeAndFlush(response) to write back the response.

If we wrap this call in a scheduled task, it is executed after a delay, as shown below:

        ctx.executor().schedule(new Runnable() {
            @Override
            public void run() {
                ctx.writeAndFlush(response);
            }
        }, latency, TimeUnit.MILLISECONDS);

The server then sends the response after the number of milliseconds given by latency. For example, here we set latency to 5 seconds (5000 ms).
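The same delayed write can also be expressed more compactly with a lambda:

    ctx.executor().schedule(() -> ctx.writeAndFlush(response), latency, TimeUnit.MILLISECONDS);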

Of course, 5 seconds is not satisfactory, so the leader or the customer comes to you and asks you to optimize it. You say this performance problem is very hard: it involves Maxwell's equations and the third law of thermodynamics, and it will take a month. The leader says fine, roll up your sleeves and work hard, and your salary will go up by 50% next month.

A month later you change latency to 2.5 seconds and performance improves by 100%. Isn't this optimization worth hundreds of millions?

summary

Of course, the last section was a joke, but you should still firmly grasp the technique of using scheduled tasks in a netty response. You know why!

The examples in this article can be found in: learn-netty4

This article has been included in http://www.flydean.com/34-netty-multiple-server/

The most accessible explanations, the most in-depth practical content, the most concise tutorials, and many tricks you never knew are all waiting for you to discover!

Welcome to follow my official account "those things in procedure": it understands technology, and understands you better!

Topics: Java Netty