Blocking queues: SynchronousQueue, LinkedTransferQueue, PriorityBlockingQueue, DelayQueue

Posted by mullz on Thu, 16 Dec 2021 20:15:15 +0100

SynchronousQueue

It has no storage at all: its capacity cannot be set or expanded, because every put must wait for a matching take. The implementation uses CAS plus spinning, with an optimization that once a thread reaches the spin threshold it stops spinning and enters the blocking state. It supports two modes backed by two data structures: fair mode is first-in-first-out, using a linked queue of waiting threads; non-fair mode is last-in-first-out, using a stack (which can also be understood as a linked list whose links point one way, so threads are pushed and popped one at a time at the head).
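The hand-off behaviour described above can be seen in a small sketch (the class and variable names here are mine, not from the post):

```java
import java.util.concurrent.SynchronousQueue;

public class SyncQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // true = fair mode (FIFO queue of waiters); false = non-fair (LIFO stack)
        SynchronousQueue<String> q = new SynchronousQueue<>(true);

        Thread consumer = new Thread(() -> {
            try {
                // take() blocks until a producer hands an element over
                System.out.println("got: " + q.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        q.put("hello");   // blocks until the consumer is ready to take
        consumer.join();
    }
}
```

Because the queue holds nothing, a non-blocking `offer` returns `false` whenever no consumer is already waiting.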

LinkedTransferQueue is a combination of SynchronousQueue and LinkedBlockingQueue

It uses the same optimization of spinning a certain number of times before blocking, and its API combines both behaviours: the direct hand-off of SynchronousQueue and the buffered enqueue of LinkedBlockingQueue.
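A short sketch of the two behaviours side by side (class name is mine): `offer`/`put` enqueue like LinkedBlockingQueue, while `transfer`/`tryTransfer` hand off directly like SynchronousQueue.

```java
import java.util.concurrent.LinkedTransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<String> q = new LinkedTransferQueue<>();

        // offer() behaves like LinkedBlockingQueue: enqueue and return at once
        q.offer("buffered");
        System.out.println(q.poll()); // buffered

        // tryTransfer() behaves like SynchronousQueue: it succeeds only
        // if a consumer is already waiting
        System.out.println(q.tryTransfer("handoff")); // false, nobody waiting

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + q.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        q.transfer("direct"); // blocks until the consumer takes it
        consumer.join();
    }
}
```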

PriorityBlockingQueue

Its data structure is a binary heap stored in an array. A binary heap is based on a complete binary tree: a complete tree is filled level by level, left to right, so only the last level may be missing nodes, and only on its right. For example, with the data 1, 2, 3: node 1 is the root, 2 its left child, 3 its right child (this one is also a full tree). With only 1 and 2: node 1 is the root and 2 its left child; this is still a complete tree and can back a binary heap. Binary heaps come in two kinds, max-heaps and min-heaps. With the data 1, 3, 2, 4, a min-heap might look like: 1 at the root, 3 as its left child, 2 as its right child, and 4 as the left child of 3. Note that the heap does not guarantee sorted order (smallest to largest); it only guarantees that the smallest element is always at the root (for a min-heap), or the largest (for a max-heap). The put method, written by Doug Lea, implements the sift-up for both max- and min-heaps in a remarkably short piece of code.

DelayQueue

DelayQueue is also based on a binary heap. An expiration time is passed in when an element is added, and by default a min-heap is used, so the element with the shortest remaining expiration time sits at the head while the others block behind it. The delay itself is implemented with an infinite loop that checks whether the expiration time minus the current time is less than or equal to zero; if so, the element is dequeued (and handed to the consumer), otherwise the thread sleeps until the head element is due and is then woken up.

Using DelayQueue

DelayQueue<OrderInfo> queue = new DelayQueue<OrderInfo>();
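The post does not show the `OrderInfo` class, but any element stored in a DelayQueue must implement `Delayed`. A minimal sketch of what such a class could look like (field names and the demo class are my assumptions):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the OrderInfo element used above
class OrderInfo implements Delayed {
    final String orderId;
    final long expireAtNanos; // absolute expiry, in System.nanoTime() terms

    OrderInfo(String orderId, long delayMillis) {
        this.orderId = orderId;
        this.expireAtNanos = System.nanoTime()
                + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        // Remaining delay; <= 0 means expired and eligible for take()
        return unit.convert(expireAtNanos - System.nanoTime(),
                            TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        // Min-heap order: the shortest remaining delay sits at the head
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }
}

public class DelayDemo {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<OrderInfo> queue = new DelayQueue<>();
        queue.put(new OrderInfo("B", 200));
        queue.put(new OrderInfo("A", 50));
        // take() blocks until the head element's delay has elapsed,
        // so "A" (shorter delay) comes out first
        System.out.println(queue.take().orderId); // A
        System.out.println(queue.take().orderId); // B
    }
}
```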

How DelayQueue works
Data structure

// Lock guaranteeing thread safety of queue operations
private final transient ReentrantLock lock = new ReentrantLock();
// Priority queue storing the elements, so the one with the shortest delay is taken first
private final PriorityQueue<E> q = new PriorityQueue<E>();
// Marks whether a thread is already waiting for the head element (used only when taking);
// leader points to the first thread blocked waiting to take from the queue
private Thread leader = null;
// Condition signalled when an element may have become available: when a new element
// arrives at the head, or when a waiting thread may need to become the leader
private final Condition available = lock.newCondition();

public DelayQueue() {}
public DelayQueue(Collection<? extends E> c) {
    this.addAll(c);
}
Enqueue: the put method
public void put(E e) {
    offer(e);
}
public boolean offer(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        // Entry
        q.offer(e);
        if (q.peek() == e) {
            // The new element is at the head of the queue, i.e. it has the
            // shortest delay, so clear the leader...
            leader = null;
            // ...and signal: the waiting node moves from the condition queue to
            // the AQS sync queue, ready to wake once the lock is released
            available.signal();
        }
        return true;
    } finally {
        lock.unlock(); // Unlock; only now can the signalled thread actually wake up
    }
}
Dequeue: the take method
public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        for (;;) {
            E first = q.peek(); // Read (but do not remove) the heap-top element
            if (first == null)  // Heap top empty: the queue has no elements, block and wait
                available.await();
            else {
                // Remaining delay of the heap-top element
                long delay = first.getDelay(NANOSECONDS);
                // delay <= 0 means it has expired: poll() pops the heap top and returns it
                if (delay <= 0)
                    return q.poll();

                // delay > 0: we will block below.
                // Drop the reference to first so it can be GC'd while we wait
                first = null;
                // If another thread is already the leader, just wait to be signalled
                if (leader != null)
                    available.await();
                else {
                    // leader is null: make the current thread the leader
                    Thread thisThread = Thread.currentThread();
                    leader = thisThread;
                    try {
                        // Sleep for exactly the remaining delay, then wake automatically.
                        // After waking, the leader is cleared and the loop re-checks
                        // whether the heap-top element is due.
                        // Waking up does not guarantee this thread gets an element:
                        // another thread may acquire the lock first and pop the heap top.
                        // Waking from a Condition happens in two steps: the node first
                        // leaves the condition queue and re-enters the AQS sync queue,
                        // and only truly wakes when another thread calls LockSupport.unpark(t)
                        available.awaitNanos(delay);
                    } finally {
                        // If leader is still the current thread, clear it so other
                        // threads get a chance to take elements
                        if (leader == thisThread)
                            leader = null;
                    }
                }
            }
        }
    } finally {
        // On the way out, if no leader is waiting and the heap top is non-null,
        // wake the next waiting thread
        if (leader == null && q.peek() != null)
            // The waiter moves from the condition queue to the sync queue, ready to wake
            available.signal();
        // Unlock; only now is the signalled thread actually woken
        lock.unlock();
    }
}

ArrayBlockingQueue: a bounded blocking queue backed by an array

LinkedBlockingQueue: an optionally-bounded blocking queue backed by a linked list

PriorityBlockingQueue: an unbounded blocking queue sorted by priority

DelayQueue: an unbounded blocking queue based on a priority queue

SynchronousQueue: a blocking queue that does not store elements

LinkedTransferQueue: an unbounded blocking queue backed by a linked list

LinkedBlockingDeque: a double-ended blocking queue backed by a linked list

Topics: Java