In ByteBuf, memory allocation and release are carried out by the concrete implementations; the abstract types only define when memory is allocated and released.
Memory allocation happens in two stages: in the first stage, memory is allocated at initialization; in the second stage, new memory is allocated when the existing memory is not large enough. The ByteBuf abstraction layer does not define the behavior of the first stage, but it does define the method used in the second stage:
public abstract ByteBuf capacity(int newCapacity)
This method is responsible for allocating a new block of memory of length newCapacity.
The abstract part of memory release is implemented in AbstractReferenceCountedByteBuf. This class implements reference counting; when a call to release() drops the reference count to zero, it invokes:
protected abstract void deallocate()
This method performs the actual memory release.
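As a quick illustration of the release path, the following minimal sketch uses the public ByteBuf API (the class name is just scaffolding for the example): once the matching release() call drops the reference count to zero, deallocate() runs.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ReleaseDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);   // refCnt == 1 after creation
        buf.retain();                        // refCnt == 2
        System.out.println(buf.refCnt());    // 2
        buf.release();                       // refCnt == 1, memory still alive
        buf.release();                       // refCnt == 0, deallocate() is invoked
        System.out.println(buf.refCnt());    // 0
    }
}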
Memory-related properties
ByteBuf defines two memory-related properties:
- capacity: the current capacity, i.e., the size of the memory currently in use; obtained via the capacity() method.
- maxCapacity: the maximum capacity, i.e., the largest memory size the ByteBuf may grow to; obtained via the maxCapacity() method.
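For example, a buffer created with an initial capacity of 16 and a maxCapacity of 64 reports the two properties as follows (a minimal sketch using the Unpooled helper; the class name is scaffolding):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class CapacityDemo {
    public static void main(String[] args) {
        // initial capacity 16, maxCapacity 64
        ByteBuf buf = Unpooled.buffer(16, 64);
        System.out.println(buf.capacity());      // 16
        System.out.println(buf.maxCapacity());   // 64
        buf.release();
    }
}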
Memory allocation timing
AbstractByteBuf defines the timing of memory allocation: when a writeXxx method is called and there is insufficient writable space, it calls capacity(int) to allocate new memory. Let's take writeInt as an example and analyze this process in detail.
1 @Override
2 public ByteBuf writeInt(int value) {
3     ensureWritable0(4);
4     _setInt(writerIndex, value);
5     writerIndex += 4;
6     return this;
7 }
8
9
10 final void ensureWritable0(int minWritableBytes) {
11     ensureAccessible();
12     if (minWritableBytes <= writableBytes()) {
13         return;
14     }
15
16     if (minWritableBytes > maxCapacity - writerIndex) {
17         throw new IndexOutOfBoundsException(String.format(
18                 "writerIndex(%d) + minWritableBytes(%d) exceeds maxCapacity(%d): %s",
19                 writerIndex, minWritableBytes, maxCapacity, this));
20     }
21
22     // Normalize the current capacity to the power of 2.
23     int newCapacity = calculateNewCapacity(writerIndex + minWritableBytes);
24
25     // Adjust to the new capacity.
26     capacity(newCapacity);
27 }
Line 3: ensureWritable0(4) ensures that 4 bytes can be written.
Line 11: ensureAccessible() makes sure the current ByteBuf is still accessible, preventing reads and writes on memory that has already been released in a multi-threaded environment.
Lines 12-13: if there is already enough writable space, return immediately.
Line 16: throw an exception if the required capacity would exceed maxCapacity.
Lines 23, 26: Calculate the size of the new memory and call capacity(int) to allocate the new memory.
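To see this in action, the following sketch (using Unpooled.buffer; the class name is scaffolding) writes past the initial capacity of a small heap buffer. With the growth policy analyzed next, the capacity jumps from 8 to 64:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ExpandOnWriteDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(8);        // initial capacity 8
        buf.writeLong(1L);                       // fills the 8 bytes exactly
        System.out.println(buf.capacity());      // 8
        buf.writeInt(42);                        // needs 4 more bytes -> capacity(int) is called
        System.out.println(buf.capacity());      // 64 (doubling starts from 64, see below)
        buf.release();
    }
}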
An important step before reallocating memory is calculating the size of the new memory. This work is done by the calculateNewCapacity method, whose code is as follows:
1 private int calculateNewCapacity(int minNewCapacity) {
2     final int maxCapacity = this.maxCapacity;
3     final int threshold = 1048576 * 4; // 4 MiB page
4
5     if (minNewCapacity == threshold) {
6         return threshold;
7     }
8
9     // If over threshold, do not double but just increase by threshold.
10     if (minNewCapacity > threshold) {
11         int newCapacity = minNewCapacity / threshold * threshold;
12         if (newCapacity > maxCapacity - threshold) {
13             newCapacity = maxCapacity;
14         } else {
15             newCapacity += threshold;
16         }
17         return newCapacity;
18     }
19
20     // Not over threshold. Double up to 4 MiB, starting from 64.
21     int newCapacity = 64;
22     while (newCapacity < minNewCapacity) {
23         newCapacity <<= 1;
24     }
25
26     return Math.min(newCapacity, maxCapacity);
27 }
Line 1: accepts the minimum required new capacity, minNewCapacity.
Line 3: defines a threshold constant of 4 MiB.
Lines 5-6: if minNewCapacity == threshold, the new capacity is exactly threshold.
Lines 10-17: if minNewCapacity > threshold, the new capacity is min(maxCapacity, threshold * n), where n is the smallest integer such that threshold * n >= minNewCapacity.
Lines 21-26: if minNewCapacity < threshold, the new capacity is min(maxCapacity, 64 * 2^n), where n is the smallest non-negative integer such that 64 * 2^n >= minNewCapacity.
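The following standalone sketch re-implements the same policy outside Netty (for illustration only; the class and method here are not Netty's) so the two branches are easy to trace with concrete values:

public class NewCapacityDemo {
    // Mirrors the growth policy above: double from 64 below the 4 MiB
    // threshold, then grow in 4 MiB steps, never exceeding maxCapacity.
    static int calculateNewCapacity(int minNewCapacity, int maxCapacity) {
        final int threshold = 1048576 * 4; // 4 MiB
        if (minNewCapacity == threshold) {
            return threshold;
        }
        if (minNewCapacity > threshold) {
            int newCapacity = minNewCapacity / threshold * threshold;
            if (newCapacity > maxCapacity - threshold) {
                newCapacity = maxCapacity;
            } else {
                newCapacity += threshold;
            }
            return newCapacity;
        }
        int newCapacity = 64;
        while (newCapacity < minNewCapacity) {
            newCapacity <<= 1;
        }
        return Math.min(newCapacity, maxCapacity);
    }

    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;
        System.out.println(calculateNewCapacity(12, max));          // 64
        System.out.println(calculateNewCapacity(3000, max));        // 4096
        System.out.println(calculateNewCapacity(5 * 1048576, max)); // 8388608 (8 MiB)
    }
}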
Implementation of Memory Allocation and Release
The concrete implementations of memory allocation and release covered in this chapter involve only the unpooled ByteBuf types, i.e.
UnpooledHeapByteBuf
UnpooledDirectByteBuf
UnpooledUnsafeHeapByteBuf
UnpooledUnsafeDirectByteBuf
The memory allocation and release code in these implementations is relatively concise and makes the principles of ByteBuf memory management easy to understand. By contrast, the pooled ByteBuf allocation and release code is much more complex and will be analyzed separately in later chapters.
UnpooledHeapByteBuf and UnpooledUnsafeHeapByteBuf implementations
In UnpooledHeapByteBuf, the memory allocation code is concentrated mainly in the capacity(int) and allocateArray() methods. The steps capacity(int) takes to allocate new memory are:
- Call allocateArray to allocate a new block of memory.
- Copy the old memory contents into the new memory.
- Replace the old memory with the new memory (lines 10 and 24).
- Release the old memory (lines 11 and 25).
The code is as follows:
1 @Override
2 public ByteBuf capacity(int newCapacity) {
3     checkNewCapacity(newCapacity);
4
5     int oldCapacity = array.length;
6     byte[] oldArray = array;
7     if (newCapacity > oldCapacity) {
8         byte[] newArray = allocateArray(newCapacity);
9         System.arraycopy(oldArray, 0, newArray, 0, oldArray.length);
10         setArray(newArray);
11         freeArray(oldArray);
12     } else if (newCapacity < oldCapacity) {
13         byte[] newArray = allocateArray(newCapacity);
14         int readerIndex = readerIndex();
15         if (readerIndex < newCapacity) {
16             int writerIndex = writerIndex();
17             if (writerIndex > newCapacity) {
18                 writerIndex(writerIndex = newCapacity);
19             }
20             System.arraycopy(oldArray, readerIndex, newArray, readerIndex, writerIndex - readerIndex);
21         } else {
22             setIndex(newCapacity, newCapacity);
23         }
24         setArray(newArray);
25         freeArray(oldArray);
26     }
27     return this;
28 }
There are two cases when capacity(int) copies the old memory contents into the new memory (newCapacity and oldCapacity are the new and old memory sizes respectively):
- newCapacity > oldCapacity: this case is relatively simple; the old contents are copied directly (line 9), and readerIndex and writerIndex are unaffected.
- newCapacity < oldCapacity: this case is more complex; capacity(int) tries to preserve the readable data by copying it into the new memory (see the sketch after this list).
- If readerIndex < newCapacity and writerIndex <= newCapacity, all readable data is copied into the new memory; readerIndex and writerIndex remain unchanged.
- If readerIndex < newCapacity and writerIndex > newCapacity, only part of the readable data is copied into the new memory and the rest is lost; readerIndex remains unchanged and writerIndex becomes newCapacity.
- If readerIndex >= newCapacity, all readable data is lost; both readerIndex and writerIndex become newCapacity.
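The following sketch (a minimal example using Unpooled.buffer; the class name is scaffolding) demonstrates the second shrinking case above: reducing a 16-byte buffer to 8 bytes caps writerIndex and truncates the readable data beyond the new capacity.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ShrinkDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);
        buf.writeBytes(new byte[12]);            // writerIndex = 12
        buf.skipBytes(4);                        // readerIndex = 4
        buf.capacity(8);                         // shrink: readerIndex(4) < 8 < writerIndex(12)
        System.out.println(buf.readerIndex());   // 4  (unchanged)
        System.out.println(buf.writerIndex());   // 8  (capped to newCapacity)
        System.out.println(buf.readableBytes()); // 4  (bytes 4..8 survived, 8..12 were lost)
        buf.release();
    }
}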
The allocateArray method is responsible for allocating a new block of memory; its implementation is simply new byte[initialCapacity]. The freeArray method is responsible for releasing memory and is an empty method.
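For reference, the base class methods look roughly like this (a simplified sketch of their behavior, not necessarily verbatim source; the heap array needs no explicit release because the garbage collector reclaims it once it becomes unreachable):

byte[] allocateArray(int initialCapacity) {
    return new byte[initialCapacity];
}

void freeArray(byte[] array) {
    // NOOP: the heap array is reclaimed by the garbage collector once unreachable
}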
UnpooledUnsafeHeapByteBuf is a direct subclass of UnpooledHeapByteBuf. The only difference in memory management is the implementation of allocateArray, which in UnpooledUnsafeHeapByteBuf is as follows:
@Override
byte[] allocateArray(int initialCapacity) {
    return PlatformDependent.allocateUninitializedArray(initialCapacity);
}
UnpooledDirectByteBuf and UnpooledUnsafeDirectByteBuf implementations
The UnpooledDirectByteBuf class uses a direct ByteBuffer (DirectByteBuffer) as its memory and relies on the DirectByteBuffer's capabilities to implement the ByteBuf interface. The allocateDirect and freeDirect methods are responsible for allocating and releasing the DirectByteBuffer. The capacity(int) method is similar to that of UnpooledHeapByteBuf: it uses allocateDirect to create a new DirectByteBuffer, copies the old memory contents into the new memory, replaces the old memory with the new memory, and finally calls freeDirect to release the old DirectByteBuffer.
1 protected ByteBuffer allocateDirect(int initialCapacity) {
2     return ByteBuffer.allocateDirect(initialCapacity);
3 }
4
5 protected void freeDirect(ByteBuffer buffer) {
6     PlatformDependent.freeDirectBuffer(buffer);
7 }
8
9 @Override
10 public ByteBuf capacity(int newCapacity) {
11     checkNewCapacity(newCapacity);
12
13     int readerIndex = readerIndex();
14     int writerIndex = writerIndex();
15
16     int oldCapacity = capacity;
17     if (newCapacity > oldCapacity) {
18         ByteBuffer oldBuffer = buffer;
19         ByteBuffer newBuffer = allocateDirect(newCapacity);
20         oldBuffer.position(0).limit(oldBuffer.capacity());
21         newBuffer.position(0).limit(oldBuffer.capacity());
22         newBuffer.put(oldBuffer);
23         newBuffer.clear();
24         setByteBuffer(newBuffer);
25     } else if (newCapacity < oldCapacity) {
26         ByteBuffer oldBuffer = buffer;
27         ByteBuffer newBuffer = allocateDirect(newCapacity);
28         if (readerIndex < newCapacity) {
29             if (writerIndex > newCapacity) {
30                 writerIndex(writerIndex = newCapacity);
31             }
32             oldBuffer.position(readerIndex).limit(writerIndex);
33             newBuffer.position(readerIndex).limit(writerIndex);
34             newBuffer.put(oldBuffer);
35             newBuffer.clear();
36         } else {
37             setIndex(newCapacity, newCapacity);
38         }
39         setByteBuffer(newBuffer);
40     }
41     return this;
42 }
Comparing this with UnpooledHeapByteBuf's capacity(int) method, we find that the two implementations are very similar and handle the same two cases:
- Lines 18-24: newCapacity > oldCapacity.
- Lines 26-39: newCapacity < oldCapacity.
The effect of the two cases on readerIndex and writerIndex is the same as in the heap implementation; the difference lies in how the data is copied: the heap implementation uses System.arraycopy, while the direct implementation uses ByteBuffer.put.
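As with the heap variant, the expansion behaviour can be observed through the public API. This sketch (class name is scaffolding) writes past the initial capacity of a direct buffer created with Unpooled.directBuffer:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class DirectExpandDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.directBuffer(8, 1024); // initial capacity 8, maxCapacity 1024
        System.out.println(buf.isDirect());           // true
        buf.writeLong(1L);                            // fills the initial 8 bytes
        buf.writeInt(42);                             // triggers capacity(int) -> allocateDirect
        System.out.println(buf.capacity());           // 64 with the growth policy shown earlier
        buf.release();                                // releases the underlying direct memory
        System.out.println(buf.refCnt());             // 0
    }
}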
Both UnpooledUnsafeDirectByteBuf and UnpooledDirectByteBuf are subclasses of AbstractReferenceCountedByteBuf, and there is no inheritance relationship between them, but their memory allocation and release implementations are the same. The difference lies in memory I/O: UnpooledUnsafeDirectByteBuf uses the UnsafeByteBufUtil class to read and write the DirectByteBuffer's memory directly, rather than using the DirectByteBuffer's own I/O methods.
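The contrast in memory I/O can be pictured with the following simplified sketch. It is not Netty's actual implementation (which goes through the package-private UnsafeByteBufUtil helper and a cached memoryAddress); instead it uses the public PlatformDependent utility to show the same idea, assuming sun.misc.Unsafe is available on the platform:

import java.nio.ByteBuffer;
import io.netty.util.internal.PlatformDependent;

public class DirectAccessDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(16);

        // UnpooledDirectByteBuf style: go through the ByteBuffer API.
        buffer.put(0, (byte) 42);

        // UnpooledUnsafeDirectByteBuf style: compute the raw address once
        // and access the memory through Unsafe-backed helpers.
        long address = PlatformDependent.directBufferAddress(buffer);
        System.out.println(PlatformDependent.getByte(address)); // 42

        PlatformDependent.freeDirectBuffer(buffer);
    }
}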