Introduction to Netty - third conversation

Posted by cosminb on Tue, 01 Mar 2022 23:44:36 +0100

1. Articles are updated first on GitHub and on my personal blog; other platforms may lag slightly behind.

2. If GitHub is slow for you, you can browse on Gitee or use Gitee's online reading; Gitee and the personal blog load relatively fast.

3. Reprint notice: please credit the GitHub source. Let's maintain a good environment for technical writing together!

4. If you want to submit an issue or PR, please do so on GitHub.

5. I will keep updating. If this helps you, consider giving the GitHub repo a Star; your Star is what keeps me writing.

I will not take notes on the source-code part of the Shang Silicon Valley course for now, because I don't think Mr. Han explains the source code well (the bullet comments and replies say the same). For the source code, digest the introductory material first, then study it through books or blogs; you can only really get a feel for the source once you can use the framework.

Google Protobuf

Basic introduction of encoding and decoding

  1. When writing a network application, the data transmitted over the network is binary byte data, so it must be encoded when sent and decoded when received [schematic diagram]
  2. A codec therefore has two components: an encoder and a decoder. The encoder converts business data into byte data, and the decoder converts byte data back into business data

Analysis of the encoding and decoding mechanism and problems of Netty

  1. Netty itself provides some codecs
  2. Encoder provided by Netty
    • StringEncoder: encodes string data.
    • ObjectEncoder: encodes Java objects.
  3. Decoder provided by Netty
    • StringDecoder, which decodes string data
    • ObjectDecoder, which decodes Java objects
  4. The ObjectDecoder and ObjectEncoder provided by Netty can encode and decode POJOs and other business objects (see the sketch after this list). Under the hood they still use Java serialization, which is not very efficient and has the following problems
    • It cannot cross languages
    • The serialized output is too large, more than 5 times the size of binary encoding
    • Serialization performance is too low
  5. This leads to a new solution: Google's Protobuf
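
The following is a minimal sketch, not part of the course code, of how the built-in Java-serialization codecs could be wired into a pipeline; the initializer class name is a hypothetical placeholder.

package com.atguigu.netty.codec;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.serialization.ClassResolvers;
import io.netty.handler.codec.serialization.ObjectDecoder;
import io.netty.handler.codec.serialization.ObjectEncoder;

//Sketch: hypothetical initializer showing where ObjectDecoder/ObjectEncoder would sit in the pipeline
public class JdkObjectCodecInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        //Decodes incoming bytes into Java objects using JDK serialization
        pipeline.addLast("objectDecoder", new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
        //Encodes outgoing Java objects into bytes using JDK serialization
        pipeline.addLast("objectEncoder", new ObjectEncoder());
        //A business handler would be added here
    }
}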

Protobuf

  1. Protobuf basic introduction and use diagram
  2. Protobuf is an open-source project released by Google; its full name is Google Protocol Buffers. It is a lightweight, efficient structured-data storage format that can be used to serialize and deserialize structured data. It is very suitable for data storage and for RPC [remote procedure call] data exchange. At present many companies are moving from HTTP + JSON to TCP + Protobuf, which is more efficient.
  3. Reference documents: https://developers.google.com/protocol-buffers/docs/proto Language Guide
  4. Protobuf manages data in the form of message
  5. It supports cross-platform and cross-language use, i.e. [the client and server can be written in different languages] (most current languages are supported, such as C++, C#, Java, Python, etc.)
  6. High performance and high reliability
  7. The Protobuf compiler can generate code automatically. Protobuf describes class definitions in a .proto file; when you write a .proto file in IDEA, the IDE will prompt you to install a proto plugin that provides syntax highlighting.
  8. The protoc.exe compiler then automatically generates the .java file from the .proto file
  9. Schematic diagram of protobuf

Protobuf quick start example

Write a program and use Protobuf to complete the following functions

  1. The client can send a StudentPoJo object to the server (encoded by Protobuf)
  2. The server can receive the StudentPoJo object and display the information (decoded by Protobuf)
	    <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>3.6.1</version>
        </dependency>

Student.proto

syntax = "proto3"; //edition
option java_outer_classname = "StudentPOJO";//The generated external class name is also the file name
//protobuf uses message to manage data
message Student { //An internal class Student will be generated in the external class of StudentPOJO, which is the real POJO object sent
    int32 id = 1; // In the Student class, there is an attribute whose name is id and type is int32(protobuf type). 1 indicates the attribute sequence number, not the value
    string name = 2;
}

compile
protoc.exe --java_out=. Student.proto
Put the generated StudentPOJO class into the project for use

The generated StudentPOJO code is too long to post here

NettyServer

package com.atguigu.netty.codec;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;

public class NettyServer {
    public static void main(String[] args) throws Exception {


        //Create BossGroup and WorkerGroup
        //explain
        //1. Create two thread groups, bossGroup and workerGroup
        //2. bossGroup only handles connection requests; the actual business processing with the client is handed over to workerGroup
        //3. Both are infinite loops
        //4. Number of sub threads (NioEventLoop) contained in bossgroup and workerGroup
        //   Default actual cpu cores * 2
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup(); //8



        try {
            //Create a server-side startup object and configure parameters
            ServerBootstrap bootstrap = new ServerBootstrap();

            //Use chain programming to set
            bootstrap.group(bossGroup, workerGroup) //Set up two thread groups
                    .channel(NioServerSocketChannel.class) //NioServerSocketChannel is used as the channel implementation of the server
                    .option(ChannelOption.SO_BACKLOG, 128) // Set the queue size for pending connection requests
                    .childOption(ChannelOption.SO_KEEPALIVE, true) //Keep the connection active
//                    .handler(null) // this handler corresponds to bossGroup; childHandler corresponds to workerGroup
                    .childHandler(new ChannelInitializer<SocketChannel>() {//Create a channel initialization object (anonymous object)
                        //Set processor for pipeline
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {


                            ChannelPipeline pipeline = ch.pipeline();
                            //Add ProtoBufDecoder to pipeline
                            //Specifies which object to decode
                            pipeline.addLast("decoder", new ProtobufDecoder(StudentPOJO.Student.getDefaultInstance()));
                            pipeline.addLast(new NettyServerHandler());
                        }
                    }); // Set the processor for the pipeline corresponding to the EventLoop of our workerGroup

            System.out.println(".....The server is ready...");

            //Bind a port and synchronize to generate a ChannelFuture object
            //Start the server (and bind the port)
            ChannelFuture cf = bootstrap.bind(6668).sync();

            //Register a listener for cf to monitor events we care about

            cf.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (cf.isSuccess()) {
                        System.out.println("Listening port 6668 succeeded");
                    } else {
                        System.out.println("Listening port 6668 failed");
                    }
                }
            });


            //Monitor closed channels
            cf.channel().closeFuture().sync();
        }finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }

}

NettyServerHandler

package com.atguigu.netty.codec;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.util.CharsetUtil;

/*
explain
1. To customize a handler, we need to extend a HandlerAdapter (the convention) specified by Netty
2. Only then does our custom class count as a handler
 */
//public class NettyServerHandler extends ChannelInboundHandlerAdapter {
public class NettyServerHandler extends SimpleChannelInboundHandler<StudentPOJO.Student> {


    //Read the actual data (here we can read the messages sent by the client)
    /*
    1. ChannelHandlerContext ctx:Context object, including pipeline, channel and address
    2. StudentPOJO.Student msg: the data sent by the client, already decoded by ProtobufDecoder
     */
    @Override
    public void channelRead0(ChannelHandlerContext ctx, StudentPOJO.Student msg) throws Exception {

        //Read the StudentPOJO.Student sent by the client


        System.out.println("Data sent by client id=" + msg.getId() + " name=" + msg.getName());
    }

    //Data reading completed
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {

        //writeAndFlush is write + flush
        //Write data to cache and refresh
        //Generally speaking, we encode the data sent
        ctx.writeAndFlush(Unpooled.copiedBuffer("hello, client~(>^ω^<)Meow 1", CharsetUtil.UTF_8));
    }

    //To handle exceptions, it is generally necessary to close the channel

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

NettyClient

package com.atguigu.netty.codec;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufEncoder;

public class NettyClient {
    public static void main(String[] args) throws Exception {

        //The client needs an event loop group
        EventLoopGroup group = new NioEventLoopGroup();


        try {
            //Create client startup object
            //Note that the client uses Bootstrap instead of ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();

            //Set relevant parameters
            bootstrap.group(group) //Set thread group
                    .channel(NioSocketChannel.class) // Set the implementation class of the client channel (reflection)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline pipeline = ch.pipeline();
                            //Add ProtoBufEncoder to pipeline
                            pipeline.addLast("encoder", new ProtobufEncoder());
                            pipeline.addLast(new NettyClientHandler()); //Add your own processor
                        }
                    });

            System.out.println("client ok..");

            //Start the client to connect to the server
            //The analysis of ChannelFuture involves the asynchronous model of netty
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 6668).sync();
            //Monitor the closed channel
            channelFuture.channel().closeFuture().sync();
        }finally {

            group.shutdownGracefully();

        }
    }
}

NettyClientHandler

package com.atguigu.netty.codec;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    //This method is triggered when the channel is ready
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {

        //Send a Student object to the server

        StudentPOJO.Student student = StudentPOJO.Student.newBuilder().setId(4).setName("Zhiduoxing Wu Yong").build();
        //Teacher , Member ,Message
        ctx.writeAndFlush(student);
    }

    //Triggered when the channel has a read event
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {

        ByteBuf buf = (ByteBuf) msg;
        System.out.println("Messages replied by the server:" + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address: "+ ctx.channel().remoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

Protobuf quick start example 2

  1. Write a program and use Protobuf to complete the following functions
  2. The client can randomly send StudentPoJo / WorkerPoJo objects to the server (encoded by Protobuf)
  3. The server can receive StudentPoJo / WorkerPoJo objects (which type needs to be judged) and display information (decoded by Protobuf)

proto

syntax = "proto3";
option optimize_for = SPEED; // Speed up parsing
option java_package="com.atguigu.netty.codec2";   //Specify which package to generate
option java_outer_classname="MyDataInfo"; // External class name, file name


/*
1.In protobuf, one message can manage other messages; in the end one message is chosen as the transmission object
2.Suppose a project needs to transfer 20 kinds of objects; you cannot create 20 proto files. Instead you can
 define the 20 messages in one file and use a single top-level message (MyMessage here)
 to decide which object is actually transmitted
3.Because in most cases only one object is transmitted at a time, the oneof restriction below is used
4.Can multiple objects be transferred? Personally I think so, for example with a map (I do not know proto syntax in depth yet)
 */
message MyMessage {

    //Define an enumeration type. If DataType is 0, it represents a Student object instance. The name DataType is user-defined
    enum DataType {
        StudentType = 0; //In proto3, the number of enum is required to start from 0
        WorkerType = 1;
    }

    //Use data_type to identify which type is being passed. Here the fields of MyMessage really begin
    DataType data_type = 1;  //1 is the field number, not a value

    /*
    1.The oneof keyword means that at most one of the fields inside can be set per message.
    The name dataBody is also user-defined
    2.Why is the field number here 2? Because data_type = 1 above already took field number 1
    3.Only two things actually appear in MyMessage:
      ①the DataType field
      ②either a Student or a Worker (only one of the two appears in a real transmission)
    */
    oneof dataBody {
        Student student = 2;  //these numbers are field numbers, not values
        Worker worker = 3;
    }


}


message Student {
    int32 id = 1;//Properties of Student class
    string name = 2; //
}
message Worker {
    string name=1;
    int32 age=2;
}

NettyServer

package com.atguigu.netty.codec2;

import com.atguigu.netty.codec.StudentPOJO;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;

public class NettyServer {
    public static void main(String[] args) throws Exception {

        
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup(); //8



        try {
            //Create a server-side startup object and configure parameters
            ServerBootstrap bootstrap = new ServerBootstrap();

            //Use chain programming to set
            bootstrap.group(bossGroup, workerGroup) //Set up two thread groups
                    .channel(NioServerSocketChannel.class) //NioServerSocketChannel is used as the channel implementation of the server
                    .option(ChannelOption.SO_BACKLOG, 128) // Set the queue size for pending connection requests
                    .childOption(ChannelOption.SO_KEEPALIVE, true) //Keep the connection active
//                    .handler(null) // this handler corresponds to bossGroup; childHandler corresponds to workerGroup
                    .childHandler(new ChannelInitializer<SocketChannel>() {//Create a channel initialization object (anonymous object)
                        //Set processor for pipeline
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {


                            ChannelPipeline pipeline = ch.pipeline();
                            //Add ProtoBufDecoder to pipeline
                            //Specifies which object to decode
                            pipeline.addLast("decoder", new ProtobufDecoder(MyDataInfo.MyMessage.getDefaultInstance()));
                            pipeline.addLast(new NettyServerHandler());
                        }
                    }); // Set the processor for the pipeline corresponding to the EventLoop of our workerGroup

            System.out.println(".....The server is ready...");

            //Bind a port and synchronize to generate a ChannelFuture object
            //Start the server (and bind the port)
            ChannelFuture cf = bootstrap.bind(6668).sync();

            //Register a listener for cf to monitor events we care about

            cf.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (cf.isSuccess()) {
                        System.out.println("Listening port 6668 succeeded");
                    } else {
                        System.out.println("Listening port 6668 failed");
                    }
                }
            });


            //Monitor closed channels
            cf.channel().closeFuture().sync();
        }finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }

}

NettyServerHandler

package com.atguigu.netty.codec2;

import com.atguigu.netty.codec.StudentPOJO;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;


//public class NettyServerHandler extends ChannelInboundHandlerAdapter {
public class NettyServerHandler extends SimpleChannelInboundHandler<MyDataInfo.MyMessage> {


    //Read the actual data (here we can read the messages sent by the client)
    /*
    1. ChannelHandlerContext ctx:Context object, including pipeline, channel and address
    2. MyDataInfo.MyMessage msg: the data sent by the client, already decoded by ProtobufDecoder
     */
    @Override
    public void channelRead0(ChannelHandlerContext ctx, MyDataInfo.MyMessage msg) throws Exception {

        //Display different information according to dataType

        MyDataInfo.MyMessage.DataType dataType = msg.getDataType();
        if(dataType == MyDataInfo.MyMessage.DataType.StudentType) {

            MyDataInfo.Student student = msg.getStudent();
            System.out.println("student id=" + student.getId() + " Student name=" + student.getName());

        } else if(dataType == MyDataInfo.MyMessage.DataType.WorkerType) {
            MyDataInfo.Worker worker = msg.getWorker();
            System.out.println("Worker's name=" + worker.getName() + " Age=" + worker.getAge());
        } else {
            System.out.println("The type of transmission is incorrect");
        }


    }


    //Data reading completed
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {

        //writeAndFlush is write + flush
        //Write data to cache and refresh
        //Generally speaking, we encode the data sent
        ctx.writeAndFlush(Unpooled.copiedBuffer("hello, client~(>^ω^<)Meow 1", CharsetUtil.UTF_8));
    }

    //To handle exceptions, it is generally necessary to close the channel

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

NettyClient

package com.atguigu.netty.codec2;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufEncoder;

public class NettyClient {
    public static void main(String[] args) throws Exception {

        //The client needs an event loop group
        EventLoopGroup group = new NioEventLoopGroup();


        try {
            //Create client startup object
            //Note that the client uses Bootstrap instead of ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();

            //Set relevant parameters
            bootstrap.group(group) //Set thread group
                    .channel(NioSocketChannel.class) // Set the implementation class of the client channel (reflection)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline pipeline = ch.pipeline();
                            //Add ProtoBufEncoder to pipeline
                            pipeline.addLast("encoder", new ProtobufEncoder());
                            pipeline.addLast(new NettyClientHandler()); //Add your own processor
                        }
                    });

            System.out.println("client ok..");

            //Start the client to connect to the server
            //The analysis of ChannelFuture involves the asynchronous model of netty
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 6668).sync();
            //Monitor the closed channel
            channelFuture.channel().closeFuture().sync();
        }finally {

            group.shutdownGracefully();

        }
    }
}

NettyClientHandler

package com.atguigu.netty.codec2;

import com.atguigu.netty.codec.StudentPOJO;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

import java.util.Random;

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    //This method is triggered when the channel is ready
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {

        //Send Student or worker objects randomly
        int random = new Random().nextInt(3);
        MyDataInfo.MyMessage myMessage = null;

        if(0 == random) { //Send Student object

            myMessage = MyDataInfo.MyMessage.newBuilder().setDataType(MyDataInfo.MyMessage.DataType.StudentType).setStudent(MyDataInfo.Student.newBuilder().setId(5).setName("Yu Qilin Lu Junyi").build()).build();
        } else { // Send a Worker object

            myMessage = MyDataInfo.MyMessage.newBuilder().setDataType(MyDataInfo.MyMessage.DataType.WorkerType).setWorker(MyDataInfo.Worker.newBuilder().setAge(20).setName("Lao Li").build()).build();
        }

        ctx.writeAndFlush(myMessage);
    }

    //Triggered when the channel has a read event
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {

        ByteBuf buf = (ByteBuf) msg;
        System.out.println("Messages replied by the server:" + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address: "+ ctx.channel().remoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

Netty codec and Handler calling mechanism

Basic description

  1. Component design of Netty: the main components of Netty include Channel, EventLoop, ChannelFuture, ChannelHandler, ChannelPipeline, etc.
  2. ChannelHandler acts as the container for the application logic that handles inbound and outbound data. For example, by implementing the ChannelInboundHandler interface (or extending ChannelInboundHandlerAdapter) you can receive inbound events and data, which are then processed by your business logic. When you want to send a response back to the client, you can also flush data from within a ChannelInboundHandler. Business logic is usually written in one or more ChannelInboundHandlers. ChannelOutboundHandler works on the same principle, but is used to process outbound data
  3. ChannelPipeline provides a container for the chain of ChannelHandlers. Taking a client application as an example, if events move from the client to the server we call them outbound; that is, data sent by the client to the server passes through a series of ChannelOutboundHandlers in the pipeline and is processed by them. In the opposite direction the events are called inbound

If you are confused about outbound and inbound, look at the calling mechanism of Netty's handler chain below, which explains it clearly through an example and a figure

Codec

  1. When Netty sends or receives a message, a data conversion takes place. An inbound message is decoded: converted from bytes into another format (such as a Java object); an outbound message is encoded into bytes.
  2. Netty provides a series of practical codecs, all of which implement the ChannelInboundHandler or ChannelOutboundHandler interface. In these classes the channelRead method has already been overridden. Taking inbound as an example, channelRead is called for each message read from the inbound Channel; it then calls the decode() method provided by the decoder and forwards the decoded message to the next ChannelInboundHandler in the ChannelPipeline.

Decoder - ByteToMessageDecoder

  1. Relational inheritance diagram
  2. Since it is impossible to know whether the remote node will send a complete message in one go, TCP may stick or split packets. This class buffers the inbound data until it is ready to be processed [TCP packet sticking and unpacking is covered later]
  3. An example analysis of ByteToMessageDecoder

Calling mechanism of Netty's handler chain

Example requirements:

  1. Use a custom encoder and decoder to illustrate Netty's handler calling mechanism
    Client sends a Long -> Server
    Server sends a Long -> Client

Readers can take a look at this picture first and keep it in mind while reading the example below.

MyServer

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MyServer {
    public static void main(String[] args) throws Exception{

        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {

            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup,workerGroup).channel(NioServerSocketChannel.class).childHandler(new MyServerInitializer()); //Customize an initialization class


            ChannelFuture channelFuture = serverBootstrap.bind(7000).sync();
            channelFuture.channel().closeFuture().sync();

        }finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }
}

MyServerInitializer

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyServerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();//Next breakpoint

        //The inbound handler decodes MyByteToLongDecoder
        pipeline.addLast(new MyByteToLongDecoder());
        //Encode the outbound handler
        pipeline.addLast(new MyLongToByteEncoder());
        //Custom handler handles business logic
        pipeline.addLast(new MyServerHandler());
        System.out.println("xx");
    }
}

MyServerHandler

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class MyServerHandler extends SimpleChannelInboundHandler<Long> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Long msg) throws Exception {

        System.out.println("From client" + ctx.channel().remoteAddress() + " Read long " + msg);

        //Send a long to the client
        ctx.writeAndFlush(98765L);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

MyClient

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class MyClient {
    public static void main(String[] args)  throws  Exception{

        EventLoopGroup group = new NioEventLoopGroup();

        try {

            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group).channel(NioSocketChannel.class)
                    .handler(new MyClientInitializer()); //Customize an initialization class

            ChannelFuture channelFuture = bootstrap.connect("localhost", 7000).sync();

            channelFuture.channel().closeFuture().sync();

        }finally {
            group.shutdownGracefully();
        }
    }
}

MyClientInitializer

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {

        ChannelPipeline pipeline = ch.pipeline();

        //Add an outbound handler to encode the data
        pipeline.addLast(new MyLongToByteEncoder());

        //At this time, an inbound decoder (inbound handler)
        pipeline.addLast(new MyByteToLongDecoder());
        //Add a custom handler to handle business
        pipeline.addLast(new MyClientHandler());


    }
}

MyClientHandler

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;

import java.nio.charset.Charset;

public class MyClientHandler  extends SimpleChannelInboundHandler<Long> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Long msg) throws Exception {

        System.out.println("Server ip=" + ctx.channel().remoteAddress());
        System.out.println("Server message received=" + msg);

    }

    //Rewrite channelActive to send data

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("MyClientHandler send data");
        //ctx.writeAndFlush(Unpooled.copiedBuffer(""))
        ctx.writeAndFlush(123456L); //Sent a long
    }
}

MyByteToLongDecoder

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class MyByteToLongDecoder extends ByteToMessageDecoder {
    /**
     *
     * decode will be called repeatedly on the received data until it determines that no new element
     * was added to the list, or the ByteBuf has no more readable bytes.
     * If the list out is not empty, its contents are passed to the next ChannelInboundHandler,
     * whose handler method will likewise be called multiple times.
     *
     * @param ctx Context object
     * @param in Inbound ByteBuf
     * @param out List Set to transfer the decoded data to the next handler
     * @throws Exception
     */
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {

        System.out.println("MyByteToLongDecoder Called");
        //Because long has 8 bytes, you need to judge that there are 8 bytes before you can read a long
        if(in.readableBytes() >= 8) {
            out.add(in.readLong());
        }
    }
}

MyLongToByteEncoder

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

public class MyLongToByteEncoder extends MessageToByteEncoder<Long> {
    //coding method 
    @Override
    protected void encode(ChannelHandlerContext ctx, Long msg, ByteBuf out) throws Exception {

        System.out.println("MyLongToByteEncoder encode Called");
        System.out.println("msg=" + msg);
        out.writeLong(msg);

    }
}

effect

Outbound inbound

Many people may be a little confused about outbound and inbound
1) The client has outbound and inbound, and so does the server
2) Taking the client as an example: data that arrives at the client from the server is inbound for the client;
data that the client sends to the server is outbound for the client.
The same holds for the server: incoming data is inbound, outgoing data is outbound
3) Why do the server-side and client-side handlers extend SimpleChannelInboundHandler instead of an outbound class such as ChannelOutboundHandler?
Actually, after we call the writeAndFlush() method on ctx inside a handler, the data is handed to the ChannelOutboundHandlers for outbound processing; we simply have not defined any outbound handler. If needed, we can implement a ChannelOutboundHandler ourselves (a sketch follows below)
4) In summary, both the client and the server perform outbound and inbound operations
**The server sends data to the client:** server -> outbound -> socket channel -> inbound -> client

**The client sends data to the server:** client -> outbound -> socket channel -> inbound -> server
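
For point 3), the following is a minimal sketch, not part of the course code, of what a custom outbound handler could look like; the class name is a hypothetical placeholder. It just prints the outgoing message before passing it on toward the socket.

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

//Sketch of a custom outbound handler (not part of the course code)
public class MyOutboundLoggingHandler extends ChannelOutboundHandlerAdapter {

    //write is called for every outbound message that passes through this handler
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        System.out.println("Outbound msg = " + msg);
        //Hand the message to the next outbound handler (closer to the socket)
        super.write(ctx, msg, promise);
    }
}

It would be added to the pipeline like the other handlers, e.g. pipeline.addLast(new MyOutboundLoggingHandler()); where it is placed relative to the encoder determines whether it sees the original object or the already-encoded bytes.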

Below is the figure from Netty's official source code. Personally I don't find it as easy to understand as the figure above.

(figure: image/chapter08_05.png, the diagram from Netty's official source code)

Small details of ByteToMessageDecoder

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;

import java.nio.charset.Charset;

public class MyClientHandler  extends SimpleChannelInboundHandler<Long> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Long msg) throws Exception {

        System.out.println("Server ip=" + ctx.channel().remoteAddress());
        System.out.println("Server message received=" + msg);

    }

    //Rewrite channelActive to send data

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("MyClientHandler send data");

        //analysis
        //1. "ABCD ABCD ABCD" is 16 bytes
        ctx.writeAndFlush(Unpooled.copiedBuffer("abcdabcdabcdabcd",CharsetUtil.UTF_8));

    }
}

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class MyByteToLongDecoder extends ByteToMessageDecoder {
    /**
     *
     * decode will be called repeatedly on the received data until it determines that no new element
     * was added to the list, or the ByteBuf has no more readable bytes.
     * If the list out is not empty, its contents are passed to the next ChannelInboundHandler,
     * whose handler method will likewise be called multiple times.
     *
     * @param ctx Context object
     * @param in Inbound ByteBuf
     * @param out List Set to transfer the decoded data to the next handler
     * @throws Exception
     */
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {

        System.out.println("MyByteToLongDecoder Called");
        //Because long has 8 bytes, you need to judge that there are 8 bytes before you can read a long
        if(in.readableBytes() >= 8) {
            out.add(in.readLong());
        }
    }
}
  1. Since the sent string is 16 bytes, according to the above comments, decode will be called twice

The verification results are shown in the figure below:

  2. At the same time, this raises a small question

    When MyClientHandler sends a Long, our MyLongToByteEncoder is invoked and the console prints "MyLongToByteEncoder encode Called". But when the 16-byte string is sent here, that line is not printed, i.e. the encoder is not called. Why?

    1. The outbound handler that follows MyClientHandler is MyLongToByteEncoder
    2. The parent class of MyLongToByteEncoder is MessageToByteEncoder, which contains the following method
@Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        ByteBuf buf = null;
        try {
            //Judge whether the current msg is of the type this encoder handles; if so, call encode, otherwise skip encode and write the msg on unchanged
            if (acceptOutboundMessage(msg)) {
                @SuppressWarnings("unchecked")
                I cast = (I) msg;
                buf = allocateBuffer(ctx, cast, preferDirect);
                try {
                    encode(ctx, cast, buf);
                } finally {
                    ReferenceCountUtil.release(cast);
                }

                if (buf.isReadable()) {
                    ctx.write(buf, promise);
                } else {
                    buf.release();
                    ctx.write(Unpooled.EMPTY_BUFFER, promise);
                }
                buf = null;
            } else {
                ctx.write(msg, promise);
            }
        } catch (EncoderException e) {
            throw e;
        } catch (Throwable e) {
            throw new EncoderException(e);
        } finally {
            if (buf != null) {
                buf.release();
            }
        }
    }

  3. When we send data in this form
ctx.writeAndFlush(Unpooled.copiedBuffer("abcdabcdabcdabcd",CharsetUtil.UTF_8));

Since the ByteBuf type does not match Long, the Encoder's encode() is skipped and the data is written through as-is. Therefore, when writing an Encoder, make sure the type of the data passed in is consistent with the type the Encoder handles

Conclusion:

  • For both decoder and encoder handlers, the type of the message received must match the type the handler processes, otherwise the handler will not be executed
  • When decoding, the decoder must check whether there is enough data in the buffer (ByteBuf), otherwise the received results may not be as expected and may be inconsistent with what was sent.

Decoder - ReplayingDecoder

  1. public abstract class ReplayingDecoder<S> extends ByteToMessageDecoder
  2. ReplayingDecoder extends the ByteToMessageDecoder class. Using this class, we don't have to call the readableBytes() method, and we don't have to judge whether there is enough data to read. Parameter S specifies the type of user state management, where Void represents that state management is not required
  3. Application example: use ReplayingDecoder to write decoder and simplify the previous case [case demonstration]
package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;

import java.util.List;

public class MyByteToLongDecoder2 extends ReplayingDecoder<Void> {
    
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        System.out.println("MyByteToLongDecoder2 Called");
        //In the ReplayingDecoder, it is not necessary to judge whether the data is sufficient to read, but will be processed and judged internally
        out.add(in.readLong());
    }
}
  1. ReplayingDecoder is easy to use, but it also has some limitations:
    • Not all ByteBuf operations are supported. If an unsupported method is called, an UnsupportedOperationException will be thrown.
    • ReplayingDecoder may be slightly slower than ByteToMessageDecoder in some cases. For example, when the network is slow and the message format is complex, the message will be broken into multiple fragments and the speed will become slower
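
The state parameter S mentioned above is easiest to see with an example. The following is a minimal sketch, modeled on the style of the ReplayingDecoder javadoc rather than taken from the course code; the class name, the Phase enum and the length + content message layout are assumptions. It reads a 4-byte length, checkpoints, then reads the body, so a read interrupted by insufficient bytes resumes at the right phase instead of re-reading the length.

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;

import java.nio.charset.StandardCharsets;
import java.util.List;

//Sketch: a stateful ReplayingDecoder; the message layout (int length + UTF-8 body) is assumed
public class MyStatefulDecoder extends ReplayingDecoder<MyStatefulDecoder.Phase> {

    public enum Phase { READ_LENGTH, READ_CONTENT }

    private int length;

    public MyStatefulDecoder() {
        super(Phase.READ_LENGTH); //initial state
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        switch (state()) {
            case READ_LENGTH:
                length = in.readInt();
                checkpoint(Phase.READ_CONTENT); //progress saved: the length will not be re-read
                //no break: fall through and try to read the body immediately
            case READ_CONTENT:
                byte[] content = new byte[length];
                in.readBytes(content);
                checkpoint(Phase.READ_LENGTH); //a whole message was consumed, start over
                out.add(new String(content, StandardCharsets.UTF_8));
                break;
            default:
                throw new Error("Shouldn't reach here");
        }
    }
}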

Other codecs

  1. LineBasedFrameDecoder: this class is also used inside Netty itself. It uses the end-of-line characters (\n or \r\n) as the separator when parsing data.
  2. DelimiterBasedFrameDecoder: uses custom special characters as message separators.
  3. HttpObjectDecoder: a decoder for HTTP data
  4. LengthFieldBasedFrameDecoder: identifies a whole message by a specified length field, so sticky packets and half packets are handled automatically (a usage sketch follows this list).
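
The following is a minimal sketch, not part of the course code, showing where these frame decoders would typically sit in a pipeline, in front of a StringDecoder; the initializer class name is a hypothetical placeholder, and in practice only one framing strategy would be chosen.

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;

//Sketch: hypothetical initializer; pick exactly one framing decoder in a real pipeline
public class FrameDecoderInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        //Split frames on \n or \r\n, at most 1024 bytes per line
        pipeline.addLast(new LineBasedFrameDecoder(1024));

        //Alternative: split frames on a custom delimiter, e.g. '$'
        //pipeline.addLast(new DelimiterBasedFrameDecoder(1024, Unpooled.copiedBuffer("$", CharsetUtil.UTF_8)));

        //Alternative: 4-byte length field at offset 0, stripped before the frame is passed on
        //pipeline.addLast(new LengthFieldBasedFrameDecoder(1024, 0, 4, 0, 4));

        //Turn each complete frame into a String for the business handler that follows
        pipeline.addLast(new StringDecoder(CharsetUtil.UTF_8));
    }
}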

Log4j integrated into Netty

  1. Add the Log4j dependencies to Maven pom.xml
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>
    <scope>test</scope>
</dependency>
  2. Configure Log4j in resources/log4j.properties
log4j.rootLogger=DEBUG,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%p]%C{1}-%m%n
  3. Demo integration (a sketch follows below)
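
The following is a minimal sketch of the integration, an assumption rather than the original course demo: attaching Netty's LoggingHandler to the server bootstrap routes Netty's events through its internal logger (slf4j when it is on the classpath), which the log4j.properties above then formats. Reusing the earlier MyServerInitializer is also an assumption.

package com.atguigu.netty.inboundhandlerandoutboundhandler;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

//Sketch: assumed demo, not the original course code
public class LoggingDemoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    //Logs bind/accept events of the server channel at INFO level
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .childHandler(new MyServerInitializer());

            serverBootstrap.bind(7000).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}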

TCP packet sticking and unpacking and Solutions

Basic introduction to TCP packet sticking and unpacking

  1. TCP is connection-oriented and stream-oriented, and provides highly reliable service. Both ends of the transmission (client and server) have one-to-one paired sockets. To send multiple packets to the receiver more efficiently, the sender uses an optimization (the Nagle algorithm) that merges small packets sent at short intervals into one large block of data before sending. This improves efficiency, but makes it hard for the receiver to distinguish complete packets, because stream-oriented communication has no message protection boundary
  2. Since TCP has no message protection boundary, the receiving end needs to handle the message boundary problem itself, which is what we call packet sticking and unpacking. Look at the picture
  3. TCP packet sticking and unpacking diagram

Suppose the client sends two data packets D1 and D2 to the server respectively. Since the number of bytes read by the server at one time is uncertain, there may be the following four situations:

  1. The server reads two independent data packets, D1 and D2, without sticking and unpacking
  2. The server receives two data packets at one time. D1 and D2 are bonded together, which is called TCP sticky packet
  3. The server reads the data packet twice. For the first time, it reads the complete D1 packet and part of the D2 packet, and for the second time, it reads the rest of the D2 packet, which is called TCP unpacking
  4. The server reads the data in two reads: the first read gets part of the D1 packet (D1_1), and the second read gets the rest of D1 (D1_2) plus the complete D2 packet. This is also TCP unpacking.

Examples of TCP packet sticking and unpacking

When writing a Netty program, if this is not handled, the packet sticking and unpacking problem will occur

Take a specific example:

MyServer

package com.atguigu.netty.tcp;


import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MyServer {
    public static void main(String[] args) throws Exception{

        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {

            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup,workerGroup).channel(NioServerSocketChannel.class).childHandler(new MyServerInitializer()); //Customize an initialization class


            ChannelFuture channelFuture = serverBootstrap.bind(7000).sync();
            channelFuture.channel().closeFuture().sync();

        }finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }
}

MyServerInitializer

package com.atguigu.netty.tcp;



import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyServerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        pipeline.addLast(new MyServerHandler());
    }
}

MyServerHandler

package com.atguigu.netty.tcp;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.nio.charset.Charset;
import java.util.UUID;

public class MyServerHandler extends SimpleChannelInboundHandler<ByteBuf>{
    private int count;

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        //cause.printStackTrace();
        ctx.close();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {

        byte[] buffer = new byte[msg.readableBytes()];
        msg.readBytes(buffer);

        //Convert buffer to string
        String message = new String(buffer, Charset.forName("utf-8"));

        System.out.println("Server received data " + message);
        System.out.println("Amount of messages received by the server=" + (++this.count));

        //The server sends back data to the client and a random id,
        ByteBuf responseByteBuf = Unpooled.copiedBuffer(UUID.randomUUID().toString() + " ", Charset.forName("utf-8"));
        ctx.writeAndFlush(responseByteBuf);

    }
}

MyClient

package com.atguigu.netty.tcp;



import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class MyClient {
    public static void main(String[] args)  throws  Exception{

        EventLoopGroup group = new NioEventLoopGroup();

        try {

            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group).channel(NioSocketChannel.class)
                    .handler(new MyClientInitializer()); //Customize an initialization class

            ChannelFuture channelFuture = bootstrap.connect("localhost", 7000).sync();

            channelFuture.channel().closeFuture().sync();

        }finally {
            group.shutdownGracefully();
        }
    }
}

MyClientInitializer

package com.atguigu.netty.tcp;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {

        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new MyClientHandler());
    }
}

MyClientHandler

package com.atguigu.netty.tcp;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.nio.charset.Charset;

public class MyClientHandler extends SimpleChannelInboundHandler<ByteBuf> {

    private int count;
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        //Use the client to send 10 pieces of data, hello,server number
        for(int i= 0; i< 10; ++i) {
            ByteBuf buffer = Unpooled.copiedBuffer("hello,server " + i, Charset.forName("utf-8"));
            ctx.writeAndFlush(buffer);
        }
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        byte[] buffer = new byte[msg.readableBytes()];
        msg.readBytes(buffer);

        String message = new String(buffer, Charset.forName("utf-8"));
        System.out.println("Client received message=" + message);
        System.out.println("Number of messages received by the client=" + (++this.count));

    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

effect

First run:

Client

Server

Second run:

Client

Server

It can be seen that in the first run the server received all 10 messages in a single read, while in the second run it took six reads, which vividly shows the TCP packet sticking phenomenon.

TCP packet sticking and unpacking solutions

  1. Common solution: use custom protocol + codec to solve the problem
  2. The key is to let the server know exactly how many bytes to read for each message. Once the read length is known for every message, the server will never read too much or too little data, which avoids TCP packet sticking and unpacking.

Take a concrete example

  1. The client sends five Message objects, one Message object at a time
  2. The server receives the Messages one by one, decoding five times in total; every time it reads a Message it replies with a Message object to the client.

MessageProtocol

package com.atguigu.netty.protocoltcp;


//Protocol package
public class MessageProtocol {
    private int len; //crux
    private byte[] content;

    public int getLen() {
        return len;
    }

    public void setLen(int len) {
        this.len = len;
    }

    public byte[] getContent() {
        return content;
    }

    public void setContent(byte[] content) {
        this.content = content;
    }
}

MyServer

package com.atguigu.netty.protocoltcp;


import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MyServer {
    public static void main(String[] args) throws Exception{

        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {

            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup,workerGroup).channel(NioServerSocketChannel.class).childHandler(new MyServerInitializer()); //Customize an initialization class


            ChannelFuture channelFuture = serverBootstrap.bind(7000).sync();
            channelFuture.channel().closeFuture().sync();

        }finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }
}

MyServerInitializer

package com.atguigu.netty.protocoltcp;



import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyServerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        pipeline.addLast(new MyMessageDecoder());//decoder
        pipeline.addLast(new MyMessageEncoder());//encoder
        pipeline.addLast(new MyServerHandler());
    }
}

MyServerHandler

package com.atguigu.netty.protocoltcp;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.nio.charset.Charset;
import java.util.UUID;


//handler for processing business
public class MyServerHandler extends SimpleChannelInboundHandler<MessageProtocol>{
    private int count;

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        //cause.printStackTrace();
        ctx.close();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, MessageProtocol msg) throws Exception {

        //Data received and processed
        int len = msg.getLen();
        byte[] content = msg.getContent();

        System.out.println("The server received the following information");
        System.out.println("length=" + len);
        System.out.println("content=" + new String(content, Charset.forName("utf-8")));

        System.out.println("Number of message packets received by the server=" + (++this.count));

        //Reply message
        System.out.println("The server starts to reply to the message------");
        String responseContent = UUID.randomUUID().toString();
        int responseLen = responseContent.getBytes("utf-8").length;
        byte[]  responseContent2 = responseContent.getBytes("utf-8");
        //Build a protocol package
        MessageProtocol messageProtocol = new MessageProtocol();
        messageProtocol.setLen(responseLen);
        messageProtocol.setContent(responseContent2);

        ctx.writeAndFlush(messageProtocol);


    }
}

MyClient

package com.atguigu.netty.protocoltcp;



import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class MyClient {
    public static void main(String[] args)  throws  Exception{

        EventLoopGroup group = new NioEventLoopGroup();

        try {

            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group).channel(NioSocketChannel.class)
                    .handler(new MyClientInitializer()); //Customize an initialization class

            ChannelFuture channelFuture = bootstrap.connect("localhost", 7000).sync();

            channelFuture.channel().closeFuture().sync();

        }finally {
            group.shutdownGracefully();
        }
    }
}

MyClientInitializer

package com.atguigu.netty.protocoltcp;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;


public class MyClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {

        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new MyMessageEncoder()); //Add encoder
        pipeline.addLast(new MyMessageDecoder()); //Add decoder
        pipeline.addLast(new MyClientHandler());
    }
}

MyClientHandler

package com.atguigu.netty.protocoltcp;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.nio.charset.Charset;

public class MyClientHandler extends SimpleChannelInboundHandler<MessageProtocol> {

    private int count;
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        //Have the client send 5 messages: "It's cold today. Eat hot pot"

        for(int i = 0; i< 5; i++) {
            String mes = "It's cold today. Eat hot pot";
            byte[] content = mes.getBytes(Charset.forName("utf-8"));
            int length = mes.getBytes(Charset.forName("utf-8")).length;

            //Create protocol package object
            MessageProtocol messageProtocol = new MessageProtocol();
            messageProtocol.setLen(length);
            messageProtocol.setContent(content);
            ctx.writeAndFlush(messageProtocol);

        }

    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, MessageProtocol msg) throws Exception {

        int len = msg.getLen();
        byte[] content = msg.getContent();

        System.out.println("The client receives the following message");
        System.out.println("length=" + len);
        System.out.println("content=" + new String(content, Charset.forName("utf-8")));

        System.out.println("Number of messages received by the client=" + (++this.count));

    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        System.out.println("Exception message=" + cause.getMessage());
        ctx.close();
    }
}

MyMessageDecoder

package com.atguigu.netty.protocoltcp;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;

import java.util.List;

public class MyMessageDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        System.out.println();
        System.out.println();
        System.out.println("MyMessageDecoder decode Called");
        //You need to get binary bytecode - > messageprotocol packet (object)
        int length = in.readInt();

        byte[] content = new byte[length];
        in.readBytes(content);

        //Encapsulated into a MessageProtocol object, put it into out and pass it to the next handler for business processing
        MessageProtocol messageProtocol = new MessageProtocol();
        messageProtocol.setLen(length);
        messageProtocol.setContent(content);

        //Put out and pass it to the next hanlder for processing
        out.add(messageProtocol);

    }
}

MyMessageEncoder

package com.atguigu.netty.protocoltcp;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

public class MyMessageEncoder extends MessageToByteEncoder<MessageProtocol> {
    @Override
    protected void encode(ChannelHandlerContext ctx, MessageProtocol msg, ByteBuf out) throws Exception {
        System.out.println("MyMessageEncoder encode Method called");
        out.writeInt(msg.getLen());
        out.writeBytes(msg.getContent());
    }
}
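
The wire format used by this custom protocol is therefore just a 4-byte length field (ByteBuf#writeInt writes a big-endian int) followed by the raw content bytes. MyMessageDecoder reads the two fields back in the same order, and ReplayingDecoder takes care of waiting until enough bytes are available before the reads succeed.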

effect

Client output

MyMessageEncoder encode Method called
MyMessageEncoder encode Method called
MyMessageEncoder encode Method called
MyMessageEncoder encode Method called
MyMessageEncoder encode Method called


//The following is the reply that the client receives from the server one by one
MyMessageDecoder decode Called
 The client receives the following message
 length=36
 content=1b5286dd-0fc2-4f62-9bf7-d5fad84179b5
 Number of messages received by the client=1


MyMessageDecoder decode Called
 The client receives the following message
 length=36
 content=653d18cb-ab72-4163-8b95-09c94ecac873
 Number of messages received by the client=2


MyMessageDecoder decode Called
 The client receives the following message
 length=36
 content=3be6e403-91bb-4437-ada8-6cdb9eb7ef00
 Number of messages received by the client=3


MyMessageDecoder decode Called
 The client receives the following message
 length=36
 content=94c8f306-fd9c-455a-956c-16698ce4150b
 Number of messages received by the client=4


MyMessageDecoder decode Called
 The client receives the following message
 length=36
 content=7890de9c-0fa2-4317-8de1-1d464315fa1b
 Number of messages received by the client=5

Server output

MyMessageDecoder decode Called
 The server received the following information
 length=27
 content=It's cold today. Eat hot pot
 Number of message packets received by the server=1
 The server starts to reply to the message------
MyMessageEncoder encode Method called


MyMessageDecoder decode Called
 The server received the following information
 length=27
 content=It's cold today. Eat hot pot
 Number of message packets received by the server=2
 The server starts to reply to the message------
MyMessageEncoder encode Method called


MyMessageDecoder decode Called
 The server received the following information
 length=27
 content=It's cold today. Eat hot pot
 Number of message packets received by the server=3
 The server starts to reply to the message------
MyMessageEncoder encode Method called


MyMessageDecoder decode Called
 The server received the following information
 length=27
 content=It's cold today. Eat hot pot
 Number of message packets received by the server=4
 The server starts to reply to the message------
MyMessageEncoder encode Method called


MyMessageDecoder decode Called
 The server received the following information
 length=27
 content=It's cold today. Eat hot pot
 Number of message packets received by the server=5
 The server starts to reply to the message------
MyMessageEncoder encode Method called

No matter how many times it runs, the server now receives exactly five complete messages, which solves the TCP sticky/split packet problem.
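
As a side note, Netty already ships with LengthFieldBasedFrameDecoder and LengthFieldPrepender, which implement the same length-prefixed framing without a hand-written codec. The following is only a minimal sketch under the assumptions of this example (a 4-byte length field at offset 0); the commented-out MyByteHandler is a hypothetical business handler that would work directly on the framed ByteBuf rather than on MessageProtocol:

package com.atguigu.netty.protocoltcp;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;

//Sketch: the same framing done with Netty's built-in handlers instead of MyMessageDecoder/MyMessageEncoder
public class LengthFieldInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        //Inbound: split the byte stream on a 4-byte length field at offset 0 and strip the length field,
        //so that each later handler sees exactly one complete content frame
        pipeline.addLast(new LengthFieldBasedFrameDecoder(1024 * 1024, 0, 4, 0, 4));

        //Outbound: prepend a 4-byte length field to every outgoing ByteBuf
        pipeline.addLast(new LengthFieldPrepender(4));

        //pipeline.addLast(new MyByteHandler()); //hypothetical handler working on the framed ByteBuf
    }
}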

Implement simple RPC with Netty

Basic introduction to RPC

  1. RPC (Remote Procedure Call) is a computer communication protocol. It allows a program running on one computer to call a subroutine on another computer, without the programmer having to code the network interaction explicitly
  2. Two or more applications are distributed on different servers, and the calls between them look just like local method calls (as shown in the figure)

Process:

  1. The caller invokes the remote API

  2. The call to the remote API goes through an RPC proxy (RpcProxy)

  3. The RPC proxy calls RpcInvoker (the invoker on the caller side)

  4. RpcInvoker goes through the RPC connector (RpcConnector)

  5. The RPC connector encodes the data with the RPC protocol agreed on by the two machines

  6. The RPC connector then sends the data to the other party's RPC receiver over the RpcChannel channel

  7. The RPC receiver decodes the data according to the RPC protocol

  8. The decoded data is then passed to RpcProcessor

  9. The RpcProcessor passes it to RpcInvoker

  10. RpcInvoker calls the remote API

  11. Finally, the call reaches the callee

  12. Common RPC frameworks include well-known ones such as Alibaba's Dubbo, Google's gRPC, Go's rpcx, Apache Thrift, and Spring Cloud.

Our RPC call flow chart

RPC call process description

  1. The service consumer (client) invokes the service locally
  2. After receiving the call, the client stub is responsible for encapsulating the methods and parameters into a message body capable of network transmission
  3. The client stub encodes the message and sends it to the server
  4. The server stub decodes the message after receiving it
  5. The server stub calls the local service according to the decoding result
  6. The local service executes and returns the result to the server stub
  7. The server stub encodes the returned result and sends it to the consumer
  8. The client stub receives the message and decodes it
  9. The service consumer gets the result

Summary: the goal of RPC is to encapsulate steps 2 to 8 so that users do not need to care about these details and can invoke a remote service just as if they were calling a local method (a conceptual sketch of such a client stub follows)
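
A minimal conceptual sketch of steps 2 and 3: in Java the client stub is typically produced with a JDK dynamic proxy that intercepts the method call and hands it to some transport. The RpcTransport interface below is hypothetical and only stands in for "encode, send, wait for the reply, decode"; the concrete Netty-based version appears later in NettyClient#getBean.

import java.lang.reflect.Proxy;

//Conceptual client stub: the proxy packs the method call into a request, a transport sends it,
//and the decoded reply is returned to the caller as if it were a local call
public class StubSketch {

    //Hypothetical abstraction over "encode, send over the network, wait for and decode the reply"
    public interface RpcTransport {
        Object sendAndReceive(String serviceName, String methodName, Object[] args) throws Exception;
    }

    @SuppressWarnings("unchecked")
    public static <T> T createStub(Class<T> serviceInterface, RpcTransport transport) {
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[]{serviceInterface},
                (proxy, method, args) ->
                        //Steps 2 and 3: encapsulate the method and parameters and send them to the provider
                        transport.sendAndReceive(serviceInterface.getName(), method.getName(), args));
    }
}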

Implementing Dubbo-style RPC (based on Netty)

Requirement description

  1. Dubbo uses Netty as its underlying network communication framework; the requirement here is to use Netty to implement a simple RPC framework
  2. Imitating Dubbo, the consumer and the provider agree on the interface and protocol. The consumer remotely calls the provider's service, the provider returns a string, and the consumer prints the data returned by the provider. Netty 4.1.20 is used for underlying network communication

Design description

  1. Create an interface and define abstract methods. Used for agreements between consumers and providers.
  2. Create a provider, which needs to listen to the consumer's request and return data according to the contract.
  3. Create a consumer. This class needs to transparently call methods it does not implement itself (via a dynamic proxy), and internally it uses Netty to request data from the provider
  4. Analysis chart of development

code

Encapsulated RPC

This code can be understood as a simplified, encapsulated version of Dubbo

NettyServer
package com.atguigu.netty.dubborpc.netty;


import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class NettyServer {


    public static void startServer(String hostName, int port) {
        startServer0(hostName,port);
    }

    //Write a method to complete the initialization and startup of NettyServer

    private static void startServer0(String hostname, int port) {

        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {

            ServerBootstrap serverBootstrap = new ServerBootstrap();

            serverBootstrap.group(bossGroup,workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                                      @Override
                                      protected void initChannel(SocketChannel ch) throws Exception {
                                          ChannelPipeline pipeline = ch.pipeline();
                                          pipeline.addLast(new StringDecoder());
                                          pipeline.addLast(new StringEncoder());
                                          pipeline.addLast(new NettyServerHandler()); //Service processor

                                      }
                                  }

                    );

            ChannelFuture channelFuture = serverBootstrap.bind(hostname, port).sync();
            System.out.println("Service provider starts providing services~~");
            channelFuture.channel().closeFuture().sync();

        }catch (Exception e) {
            e.printStackTrace();
        }
        finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }

    }
}

NettyServerHandler
package com.atguigu.netty.dubborpc.netty;


import com.atguigu.netty.dubborpc.customer.ClientBootstrap;
import com.atguigu.netty.dubborpc.provider.HelloServiceImpl;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

//The server handler is relatively simple
public class NettyServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("---The server starts to receive the message from the customer order---");
        //Get the message sent by the client and call the service
        System.out.println("Original message:" + msg);

        /*
         1. When the client calls the server's API, we need to agree on a protocol. For example, we require that every
            message must begin with the string "HelloService#hello#".
         2. In Dubbo, what gets registered in ZooKeeper is the fully qualified name string of the class; you can see this
            clearly with the ZooKeeper plug-in for IDEA.
         */
        if(msg.toString().startsWith(ClientBootstrap.providerName)) {

            String result = new HelloServiceImpl().hello(msg.toString().substring(msg.toString().lastIndexOf("#") + 1));
            ctx.writeAndFlush(result);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

NettyClientHandler
package com.atguigu.netty.dubborpc.netty;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.Callable;

public class NettyClientHandler extends ChannelInboundHandlerAdapter implements Callable {

    private ChannelHandlerContext context;//context
    private String result; //Returned results
    private String para; //Parameters passed in when the client calls the method


    //Called once the connection with the server is established; this is the first method to be invoked (1)
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println(" channelActive Called  ");
        context = ctx; //Because we will use ctx in other methods
    }

    //Called after data is received from the server (4)
    @Override
    public synchronized void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println(" channelRead Called  ");
        result = msg.toString();
        notify(); //Wake up waiting threads
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }

    //Called by the proxy object: send data to the server -> wait() -> be woken up by channelRead -> return the result  (3)->(5)
    @Override
    public synchronized Object call() throws Exception {
        System.out.println(" call1 Called  ");
        context.writeAndFlush(para);
        wait(); //Block here until channelRead receives the server's result and calls notify()
        System.out.println(" call2 Called  ");
        return  result; //Results returned by the service provider

    }
    //(2)
    void setPara(String para) {
        System.out.println(" setPara  ");
        this.para = para;
    }
}
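
The wait()/notify() pair above works because there is only ever one request in flight per connection. The same hand-off could also be expressed with a CompletableFuture, which avoids synchronizing on the handler and makes a time-out easy to add. This is only a sketch under that single-request assumption; FutureClientHandler and its send method are illustrative names, not part of the original code:

package com.atguigu.netty.dubborpc.netty;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

//Sketch: one outstanding request at a time, completed via a CompletableFuture instead of wait()/notify()
public class FutureClientHandler extends ChannelInboundHandlerAdapter {

    private volatile ChannelHandlerContext context;
    private volatile CompletableFuture<String> pending;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        context = ctx; //Remember the context so send() can write to the channel later
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        CompletableFuture<String> future = pending;
        if (future != null) {
            future.complete(msg.toString()); //Hand the server's reply to the waiting caller
        }
    }

    //Send a request and block (with a time-out) until the reply arrives
    public String send(String para) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending = future;
        context.writeAndFlush(para);
        return future.get(5, TimeUnit.SECONDS);
    }
}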

NettyClient
package com.atguigu.netty.dubborpc.netty;


import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

import java.lang.reflect.Proxy;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NettyClient {

    //Create thread pool
    private static ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    private static NettyClientHandler client;
    private int count = 0;

    //The writing method uses the proxy mode to obtain a proxy object

    public Object getBean(final Class<?> serviceClass, final String providerName) {

        return Proxy.newProxyInstance(Thread.currentThread().getContextClassLoader(),
                new Class<?>[]{serviceClass}, (proxy, method, args) -> {

                    System.out.println("(proxy, method, args) get into...." + (++count) + " second");
                    //The code of {} part will be entered every time the client calls hello
                    if (client == null) {
                        initClient();
                    }

                    //Set the information to be sent to the server
                    //providerName: protocol header, args[0]: the data to be sent by the client to the server
                    client.setPara(providerName + args[0]);

                    //
                    return executor.submit(client).get();

                });
    }

    //Initialize client
    private static void initClient() {
        client = new NettyClientHandler();
        //Create EventLoopGroup
        NioEventLoopGroup group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.TCP_NODELAY, true)
                .handler(
                        new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) throws Exception {
                                ChannelPipeline pipeline = ch.pipeline();
                                pipeline.addLast(new StringDecoder());
                                pipeline.addLast(new StringEncoder());
                                pipeline.addLast(client);
                            }
                        }
                );

        try {
            bootstrap.connect("127.0.0.1", 7000).sync();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Interface

package com.atguigu.netty.dubborpc.publicinterface;

//This is an interface required by both service providers and service consumers
public interface HelloService {

    String hello(String mes);
}

Server (provider)

HelloServiceImpl
package com.atguigu.netty.dubborpc.provider;

import com.atguigu.netty.dubborpc.publicinterface.HelloService;

public class HelloServiceImpl implements HelloService{

    private static int count = 0;
    //When a consumer calls this method, a result is returned
    @Override
    public String hello(String mes) {
        System.out.println("Client message received=" + mes);
        System.out.println();
        //Return different results according to mes
        if(mes != null) {
            return "Hello client, I have received your message. The message is:[" + mes + "] ,The first" + (++count) + " second \n";
        } else {
            return "Hello client, I have received your message ";
        }
    }
}

ServerBootstrap
package com.atguigu.netty.dubborpc.provider;

import com.atguigu.netty.dubborpc.netty.NettyServer;

//ServerBootstrap will start a service provider, NettyServer
public class ServerBootstrap {
    public static void main(String[] args) {

        //Start the service provider on 127.0.0.1:7000
        NettyServer.startServer("127.0.0.1", 7000);
    }
}

Client (consumer)

package com.atguigu.netty.dubborpc.customer;

import com.atguigu.netty.dubborpc.netty.NettyClient;
import com.atguigu.netty.dubborpc.publicinterface.HelloService;

public class ClientBootstrap {


    //The protocol header is defined here
    public static final String providerName = "HelloService#hello#";

    public static void main(String[] args) throws  Exception{

        //Create a consumer
        NettyClient customer = new NettyClient();

        //Create proxy object
        HelloService service = (HelloService) customer.getBean(HelloService.class, providerName);

        for (;; ) {
            Thread.sleep(2 * 1000);
            //Call the method (service) of the service provider through the proxy object
            String res = service.hello("Hello dubbo~");
            System.out.println("Result of call res= " + res);
        }
    }
}
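
To run the demo, start ServerBootstrap first (it binds to 127.0.0.1:7000 and blocks until the channel is closed), then start ClientBootstrap; the client builds the proxy, opens the connection on the first call to service.hello(...), and afterwards issues one call every two seconds.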

Call procedure

  1. ClientBootstrap#main initiates a call
  2. Execution reaches the following line of code
 HelloService service = (HelloService) customer.getBean(HelloService.class, providerName);
  3. NettyClient#getBean is called, and the connection with the server is established inside this method.

  4. As a result, NettyClientHandler#channelActive is executed

  5. Execution then returns to NettyClient#getBean, which calls NettyClientHandler#setPara; afterwards the handler is submitted as a task to the thread pool

  6. Because the task was submitted to the thread pool, NettyClientHandler#call is ready to run as a thread task

  7. In NettyClientHandler#call the data is sent to the service provider

    context.writeAndFlush(para);
    

    Since the result from the service provider has not arrived yet, the thread then calls wait() and blocks

  8. On the service provider side, when data arrives on the socket channel, NettyServerHandler#channelRead is executed, which runs

    String result = new HelloServiceImpl().hello(msg.toString().substring(msg.toString().lastIndexOf("#") + 1));
    
  9. The business logic runs in HelloServiceImpl#hello, the result is returned to NettyServerHandler#channelRead, and the data is then sent back to the client

  10. NettyClientHandler#channelRead receives the data sent by the service provider and wakes up the waiting thread

  11. The previously waiting thread therefore resumes in NettyClientHandler#call and returns the result to NettyClient#getBean

  12. NettyClient#getBean gets the data, and the corresponding call in ClientBootstrap#main returns with the data provided by the server.

     String res = service.hello("Hello dubbo~");
    

At this point, one RPC call is complete.

effect

ClientBootstrap output

(proxy, method, args) entered, call number 1
 setPara  
 channelActive Called  
 call1 Called  
 channelRead Called  
 call2 Called  
Result of call res= Hello client, I have received your message: [Hello dubbo~], call number 1

(proxy, method, args) entered, call number 2
 setPara  
 call1 Called  
 channelRead Called  
 call2 Called  
Result of call res= Hello client, I have received your message: [Hello dubbo~], call number 2

(proxy, method, args) entered, call number 3
 setPara  
 call1 Called  
 channelRead Called  
 call2 Called  
Result of call res= Hello client, I have received your message: [Hello dubbo~], call number 3

(proxy, method, args) entered, call number 4
 setPara  
 call1 Called  
 channelRead Called  
 call2 Called  
Result of call res= Hello client, I have received your message: [Hello dubbo~], call number 4

(proxy, method, args) entered, call number 5
 setPara  
 call1 Called  
 channelRead Called  
 call2 Called  
Result of call res= Hello client, I have received your message: [Hello dubbo~], call number 5

ServerBootstrap output

Service provider starts providing services~~
---The server starts to receive the client's message---
Original message: HelloService#hello#Hello dubbo~
Client message received=Hello dubbo~

---The server starts to receive the client's message---
Original message: HelloService#hello#Hello dubbo~
Client message received=Hello dubbo~

---The server starts to receive the client's message---
Original message: HelloService#hello#Hello dubbo~
Client message received=Hello dubbo~

---The server starts to receive the client's message---
Original message: HelloService#hello#Hello dubbo~
Client message received=Hello dubbo~

---The server starts to receive the client's message---
Original message: HelloService#hello#Hello dubbo~
Client message received=Hello dubbo~

Topics: Netty