Java NIO: solving the half-packet and sticky-packet problem

Posted by unklematt on Sun, 19 Apr 2020 14:59:08 +0200


Background of the problem
NIO is buffer oriented rather than stream oriented. Since a buffer necessarily has a fixed size, two problems usually arise:

Sticky packets: when the buffer is large enough, several messages may be read from the channel into the buffer at once (network timing is never predictable). If the boundaries between packets cannot be told apart, the packets stick together.
Half packets: if a message has not been fully received by the time the buffer is full, the data taken out of the buffer is an incomplete message, i.e. a half packet. (A minimal illustration follows this list.)
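For intuition, here is a minimal stand-alone illustration with plain JDK NIO (not code from this project; host and port are made up). A single read() into a fixed-size buffer knows nothing about message boundaries, so it may return several messages glued together, or only part of one:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NaiveRead {
    public static void main(String[] args) throws IOException {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 8080))) {
            ByteBuffer buffer = ByteBuffer.allocate(128);
            int read = channel.read(buffer); // may hold "hello" + "world" at once, or only half of a long message
            buffer.flip();
            String chunk = StandardCharsets.UTF_8.decode(buffer).toString();
            System.out.println("read " + read + " bytes: " + chunk);
        }
    }
}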
Before getting into the solution, it is worth outlining my overall code architecture.
The code is in the GitHub repository:

https://github.com/CuriousLei/smyl-im

In this project, the design of my NIO core library is shown in the following flow chart:

Introduction:

The server creates a Connector object for each connected client to provide IO services;
The ioArgs object holds the buffer in an instance field and uses it to exchange data with the channel directly (a rough sketch of ioArgs follows this list);
Two thread pools drive reading from and writing to ioArgs respectively;
Relationship between Connector and ioArgs: (1) input: the thread pool handles read events, writes the data into ioArgs and calls back to the Connector; (2) output: the Connector writes data into ioArgs and hands the ioArgs to a Runnable that the thread pool processes;
Two selector threads listen for channel read and write events respectively; when an event is ready it triggers the thread pool to work.
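The ioArgs class itself is not listed in this post (the full version is in the repository). As a rough sketch only, built from the method names that appear in the dispatcher code later on and with the internals assumed, it is essentially a wrapper around a fixed-size ByteBuffer:

package cn.buptleida.niohdl.core;

import java.nio.ByteBuffer;

// Sketch of the ioArgs buffer wrapper; the method names follow their usage in the
// dispatcher code below, but the bodies here are assumptions, not the project's real code.
public class ioArgs {
    private int limit = 256; // respected by the channel read/write methods of the real class (not shown)
    private final ByteBuffer buffer = ByteBuffer.allocate(256);

    // Copy bytes from an external array into the internal buffer, starting at offset
    public int readFrom(byte[] bytes, int offset) {
        int size = Math.min(bytes.length - offset, buffer.remaining());
        buffer.put(bytes, offset, size);
        return size;
    }

    // Copy the received data out of the internal buffer into an external array
    public int writeTo(byte[] bytes, int offset) {
        int size = Math.min(bytes.length - offset, buffer.remaining());
        buffer.get(bytes, offset, size);
        return size;
    }

    // Prepare the buffer for being filled
    public void startWriting() {
        buffer.clear();
    }

    // Finish filling and flip() to read mode
    public void finishWriting() {
        buffer.flip();
    }

    // Write the 4-byte length header
    public void writeLength(int total) {
        buffer.putInt(total);
    }

    // Read the 4-byte length header
    public int readLength() {
        return buffer.getInt();
    }

    // Restrict how many bytes the next receive may fill in
    public void setLimit(int limit) {
        this.limit = limit;
    }

    public int capacity() {
        return buffer.capacity();
    }

    public interface IoArgsEventListener {
        void onStarted(ioArgs args);

        void onCompleted(ioArgs args);
    }
}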
Thinking
With this design, sticky-packet and half-packet problems are bound to appear, and both are easy to reproduce:

Make the ioArgs buffer a little smaller and send a piece of data longer than that; the server reads it as two messages, i.e. each message is incomplete (half packet);
Add a Thread.sleep() delay in the thread code and let the client send several messages in a row (with a total length smaller than the buffer); this reproduces the sticky-packet phenomenon (a reproduction sketch follows this list).
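As a rough sketch of the second reproduction (plain JDK NIO on the client side; host, port and message contents are made up, this is not project code): several small writes sent back to back will often arrive in a single server-side read.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class StickyPacketClient {
    public static void main(String[] args) throws IOException {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 8080))) {
            // Three small messages written in quick succession; if the server-side handler is delayed
            // (for example by a Thread.sleep()), one read on the server returns all of them glued together.
            for (String msg : new String[]{"msg-1", "msg-2", "msg-3"}) {
                channel.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
            }
        }
    }
}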
The root cause is that message bodies do not map one-to-one onto the buffer contents. So, how do we solve it?

Fixed-length header scheme
The problem can be solved with a fixed-length header: four bytes at the start of each message store an int value recording the length of the data that follows, which marks out one message body.

When reading, data is taken from the ioArgs buffer according to the length recorded in the header; if the required length has not been reached yet, reading continues with the next ioArgs. This way neither sticky packets nor half packets occur.
When writing, data is wrapped with the same mechanism: the first four bytes record the length.
In my project the client and the server share one NIO core package, niohdl, which keeps the data format consistent on both sides.
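The framing itself is nothing more than prefixing every payload with its length. A minimal stand-alone sketch of the idea with a plain ByteBuffer (illustration only, not the niohdl code):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthHeaderDemo {
    // Frame a payload: a 4-byte length header followed by the payload bytes
    static ByteBuffer frame(byte[] payload) {
        ByteBuffer out = ByteBuffer.allocate(4 + payload.length);
        out.putInt(payload.length);
        out.put(payload);
        out.flip();
        return out;
    }

    // Unframe a single message: read the header first, then exactly that many bytes
    static byte[] unframe(ByteBuffer in) {
        int length = in.getInt();
        byte[] body = new byte[length];
        in.get(body);
        return body;
    }

    public static void main(String[] args) {
        ByteBuffer framed = frame("hello niohdl".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(unframe(framed), StandardCharsets.UTF_8));
    }
}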

Design scheme
To implement the idea above, a Dispatcher layer must be added between the Connector and ioArgs to handle the conversion between the message body (named Packet) and the buffer. Depending on the direction they are called ReceiveDispatcher and SendDispatcher; in other words, they handle the conversion between Packet and ioArgs.
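The two dispatcher interfaces themselves are not listed in this post. Reconstructed from how AsyncSendDispatcher and AsyncReceiveDispatcher implement them below (the exact definitions are in the repository, so treat this as a sketch), they look roughly like this, each in its own file under cn.buptleida.niohdl.core:

import java.io.Closeable;

public interface SendDispatcher extends Closeable {
    void send(SendPacket packet);

    void cancel(SendPacket packet);
}

public interface ReceiveDispatcher extends Closeable {
    void start();

    void stop();

    interface ReceivePacketCallback {
        void onReceivePacketCompleted(ReceivePacket packet);
    }
}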

Packet
Packet defines the message body. The inheritance relationship is shown in the following figure:

Packet is the base class. The code is as follows:

package cn.buptleida.niohdl.core;

import java.io.Closeable;
import java.io.IOException;

/**
 * Public data encapsulation.
 * Provides the definition of type and basic length.
 */
public class Packet implements Closeable {

    protected byte type;
    protected int length;

    public byte type() {
        return type;
    }

    public int length() {
        return length;
    }

    @Override
    public void close() throws IOException {

    }
}
SendPacket and ReceivePacket represent the outgoing and the incoming message body respectively; StringSendPacket and StringReceivePacket represent string messages. For now only string messages are sent and received; later there may be files and so on, which will require extending these classes.

Working with byte arrays is unavoidable in the code. Taking StringSendPacket as an example, it needs a method that converts a String into a byte[]. The code is as follows:

package cn.buptleida.niohdl.box;

import cn.buptleida.niohdl.core.SendPacket;

public class StringSendPacket extends SendPacket {

    private final byte[] bytes;

    public StringSendPacket(String msg) {
        this.bytes = msg.getBytes();
        this.length = bytes.length; // field inherited from Packet
    }

    @Override
    public byte[] bytes() {
        return bytes;
    }
}
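The remaining Packet subclasses are not listed here. Judging from how they are used in the dispatchers below (bytes() and isCanceled() on SendPacket, save() on ReceivePacket, the StringReceivePacket(length) constructor), a rough sketch might look like the following; the details, including the string() accessor, are assumptions, and the real classes are in the repository:

// Sketches only; each class lives in its own file (SendPacket and ReceivePacket in
// cn.buptleida.niohdl.core, StringReceivePacket in cn.buptleida.niohdl.box).
public abstract class SendPacket extends Packet {
    private boolean canceled;

    public abstract byte[] bytes();

    public boolean isCanceled() {
        return canceled;
    }
}

public abstract class ReceivePacket extends Packet {
    // Append count bytes from the given array to this packet
    public abstract void save(byte[] bytes, int count);
}

public class StringReceivePacket extends ReceivePacket {
    private final byte[] buffer;
    private int position;

    public StringReceivePacket(int length) {
        this.buffer = new byte[length];
        this.length = length; // field inherited from Packet
    }

    @Override
    public void save(byte[] bytes, int count) {
        System.arraycopy(bytes, 0, buffer, position, count);
        position += count;
    }

    // Hypothetical accessor for the assembled text
    public String string() {
        return new String(buffer);
    }
}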
SendDispatcher
The Connector object holds a SendDispatcher in an instance field. When data is sent, the methods in SendDispatcher wrap and process it. The general relationship is as follows:

SendDispatcher maintains a task queue. When a message needs to be sent, the Connector wraps it in a SendPacket, puts it into the queue and starts working through the queue. The packetTemp variable references the packet currently being sent; the four-byte length header and the packet data are written into the ioArgs buffer, and the position and total pointers are used to judge whether packetTemp has been written out completely, i.e. whether to keep sending the rest of packetTemp or to move on to the next packet in the queue.
The flow chart of this process is as follows:

In the code, SendDispatcher is actually an interface, implemented here by AsyncSendDispatcher. The code is as follows:

package cn.buptleida.niohdl.impl.async;

import cn.buptleida.niohdl.core.*;
import cn.buptleida.utils.CloseUtil;

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncSendDispatcher implements SendDispatcher {

    private final AtomicBoolean isClosed = new AtomicBoolean(false);
    private Sender sender;
    private Queue<SendPacket> queue = new ConcurrentLinkedDeque<>();
    private AtomicBoolean isSending = new AtomicBoolean();
    private ioArgs ioArgs = new ioArgs();
    private SendPacket packetTemp;
    // Size of the packet currently being sent, and how much of it has been sent so far
    private int total;
    private int position;

    public AsyncSendDispatcher(Sender sender) {
        this.sender = sender;
    }

    /**
     * Called by the connector after it has wrapped the data into a packet.
     *
     * @param packet packet to send
     */
    @Override
    public void send(SendPacket packet) {
        queue.offer(packet); // put the packet into the queue
        if (isSending.compareAndSet(false, true)) {
            sendNextPacket();
        }
    }

    @Override
    public void cancel(SendPacket packet) {

    }

    /**
     * Take the next packet from the queue, skipping packets that were cancelled before being sent.
     *
     * @return next packet, or null if the queue is empty
     */
    private SendPacket takePacket() {
        SendPacket packet = queue.poll();
        if (packet != null && packet.isCanceled()) {
            // Cancelled before it was sent; take the next one
            return takePacket();
        }
        return packet;
    }

    private void sendNextPacket() {
        SendPacket temp = packetTemp;
        if (temp != null) {
            CloseUtil.close(temp);
        }
        SendPacket packet = packetTemp = takePacket();
        if (packet == null) {
            // Queue is empty; clear the sending flag
            isSending.set(false);
            return;
        }

        total = packet.length();
        position = 0;

        sendCurrentPacket();
    }

    private void sendCurrentPacket() {
        ioArgs args = ioArgs;

        args.startWriting(); // reset the pointers of the ioArgs buffer

        if (position >= total) {
            sendNextPacket();
            return;
        } else if (position == 0) {
            // First chunk of the packet: it must carry the length header
            args.writeLength(total);
        }

        byte[] bytes = packetTemp.bytes();
        // Copy data from bytes into the ioArgs buffer
        int count = args.readFrom(bytes, position);
        position += count;

        // Finish packing
        args.finishWriting(); // flip() operation
        // Register OP_WRITE with the channel and attach args to a Runnable; when the selector
        // thread sees the channel is ready, the thread pool is triggered to send the message
        try {
            sender.sendAsync(args, ioArgsEventListener);
        } catch (IOException e) {
            closeAndNotify();
        }
    }

    private void closeAndNotify() {
        CloseUtil.close(this);
    }

    @Override
    public void close() {
        if (isClosed.compareAndSet(false, true)) {
            isSending.set(false);
            SendPacket packet = packetTemp;
            if (packet != null) {
                packetTemp = null;
                CloseUtil.close(packet);
            }
        }
    }

    /**
     * Callback from the writeHandler output thread.
     */
    private ioArgs.IoArgsEventListener ioArgsEventListener = new ioArgs.IoArgsEventListener() {
        @Override
        public void onStarted(ioArgs args) {

        }

        @Override
        public void onCompleted(ioArgs args) {
            // Keep sending the current packet (packetTemp); one packet may not fit in a single ioArgs
            sendCurrentPacket();
        }
    };
}

ReceiveDispatcher
Similarly, ReceiveDispatcher is an interface, implemented here by AsyncReceiveDispatcher. The Connector object holds an AsyncReceiveDispatcher in an instance field. When data arrives, it is unpacked by the methods in ReceiveDispatcher. The general relationship is as follows:

Each message body starts with a four-byte int field holding the length of the message, and data is read according to that length. If one ioArgs does not contain the full length, the next ioArgs is read as well, which guarantees the integrity of the data packet. I will skip the flow chart for this process and be a little lazy; the comments in the code below should make it clear enough.

The code for AsyncReceiveDispatcher is as follows:

package cn.buptleida.niohdl.impl.async;

import cn.buptleida.niohdl.box.StringReceivePacket;
import cn.buptleida.niohdl.core.ReceiveDispatcher;
import cn.buptleida.niohdl.core.ReceivePacket;
import cn.buptleida.niohdl.core.Receiver;
import cn.buptleida.niohdl.core.ioArgs;
import cn.buptleida.utils.CloseUtil;

import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncReceiveDispatcher implements ReceiveDispatcher {

    private final AtomicBoolean isClosed = new AtomicBoolean(false);
    private final Receiver receiver;
    private final ReceivePacketCallback callback;
    private ioArgs args = new ioArgs();
    private ReceivePacket packetTemp;
    private byte[] buffer;
    private int total;
    private int position;

    public AsyncReceiveDispatcher(Receiver receiver, ReceivePacketCallback callback) {
        this.receiver = receiver;
        this.receiver.setReceiveListener(ioArgsEventListener);
        this.callback = callback;
    }

    /**
     * Called from the connector.
     */
    @Override
    public void start() {
        registerReceive();
    }

    private void registerReceive() {
        try {
            receiver.receiveAsync(args);
        } catch (IOException e) {
            closeAndNotify();
        }
    }

    private void closeAndNotify() {
        CloseUtil.close(this);
    }

    @Override
    public void stop() {

    }

    @Override
    public void close() throws IOException {
        if (isClosed.compareAndSet(false, true)) {
            ReceivePacket packet = packetTemp;
            if (packet != null) {
                packetTemp = null;
                CloseUtil.close(packet);
            }
        }
    }

    /**
     * Callback from the readHandler input thread.
     */
    private ioArgs.IoArgsEventListener ioArgsEventListener = new ioArgs.IoArgsEventListener() {
        @Override
        public void onStarted(ioArgs args) {
            int receiveSize;
            if (packetTemp == null) {
                // No packet in progress: read the 4-byte length header first
                receiveSize = 4;
            } else {
                receiveSize = Math.min(total - position, args.capacity());
            }
            // Limit how much data this ioArgs will accept
            args.setLimit(receiveSize);
        }

        @Override
        public void onCompleted(ioArgs args) {
            assemblePacket(args);
            // Keep receiving; the same message may be split across two ioArgs
            registerReceive();
        }
    };

    /**
     * Assemble received data into a packet.
     *
     * @param args the ioArgs that has just been filled
     */
    private void assemblePacket(ioArgs args) {
        if (packetTemp == null) {
            int length = args.readLength();
            packetTemp = new StringReceivePacket(length);
            buffer = new byte[length];
            total = length;
            position = 0;
        }
        // Copy the data in args into the temporary buffer
        int count = args.writeTo(buffer, 0);
        if (count > 0) {
            // Store the data into the buffer of StringReceivePacket
            packetTemp.save(buffer, count);
            position += count;

            if (position == total) {
                completePacket();
                packetTemp = null;
            }
        }
    }

    private void completePacket() {
        ReceivePacket packet = this.packetTemp;
        CloseUtil.close(packet);
        callback.onReceivePacketCompleted(packet);
    }
}
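To tie the pieces together, here is a hypothetical sketch of how a Connector might wire the two dispatchers up (the sender and receiver variables and the callback body are assumptions; the real Connector class is in the repository):

// Hypothetical wiring inside the Connector, illustration only
SendDispatcher sendDispatcher = new AsyncSendDispatcher(sender);   // sender wraps the write side of the channel
ReceiveDispatcher receiveDispatcher = new AsyncReceiveDispatcher(receiver, packet -> {
    // Called once a complete message body has been assembled
    System.out.println("received a complete packet of length " + packet.length());
});
receiveDispatcher.start();                          // begin listening for incoming ioArgs
sendDispatcher.send(new StringSendPacket("hello")); // framed with the 4-byte length header by the dispatcher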
Summary
There is nothing mysterious about the solution to sticky packets and half packets; it is merely a bit involved. The core of the method is to define a custom message body, Packet, and to do the copying and conversion between byte arrays and the buffer inside it. Of course, the position and limit pointers play an essential part in this (a small demonstration follows).
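As a tiny reminder of what those pointers do (plain JDK ByteBuffer, unrelated to the project code):

import java.nio.ByteBuffer;

public class PointerDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.put(new byte[]{1, 2, 3});  // position = 3, limit = 8
        buf.flip();                    // position = 0, limit = 3: ready for reading
        System.out.println(buf.position() + " / " + buf.limit()); // prints 0 / 3
        byte[] out = new byte[buf.remaining()];
        buf.get(out);                  // position = 3 again
        buf.clear();                   // position = 0, limit = 8: ready to be filled again
    }
}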

Writing this post is also a way to sort out and record the work so far. I will keep learning and practising through the smyl-im project. The scattered notes from my earlier study are sitting in my Youdao cloud notes and do not really need to be turned into blog posts; this post is a more systematic piece that introduces the background of the project, and later posts can build and expand on it.

Original address https://www.cnblogs.com/buptleida/p/12732288.html
