synchronized and ReentrantLock locking
synchronized locking
public class Ticket implements Runnable {
    private int ticketNum = 1000;
    // All threads must synchronize on the same object for the lock to take effect
    private static final Object obj = new Object();

    @Override
    public void run() {
        while (true) {
            synchronized (obj) {
                if (ticketNum <= 0) {
                    System.out.println("The tickets have all been sold");
                    break;
                } else {
                    ticketNum--;
                    System.out.println(Thread.currentThread().getName()
                            + " sold one ticket, " + ticketNum + " tickets left");
                }
            }
        }
    }
}
ReentrantLock locking
import java.util.concurrent.locks.ReentrantLock;

public class Ticket implements Runnable {
    private int ticketNum = 1000;
    private final ReentrantLock lock = new ReentrantLock();

    @Override
    public void run() {
        while (true) {
            // Idiomatic pattern: acquire the lock, then release it in finally
            lock.lock();
            try {
                if (ticketNum <= 0) {
                    System.out.println("The tickets have all been sold");
                    break;
                } else {
                    ticketNum--;
                    System.out.println(Thread.currentThread().getName()
                            + " sold one ticket, " + ticketNum + " tickets left");
                }
            } finally {
                lock.unlock();
            }
        }
    }
}
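For completeness, a minimal driver is sketched below (the TicketDemo class and window names are assumptions for illustration, not part of the original): several threads share one Ticket instance, so they all sell from the same ticketNum.

public class TicketDemo {
    public static void main(String[] args) {
        // One shared Runnable instance: all threads decrement the same ticketNum
        Ticket ticket = new Ticket();
        new Thread(ticket, "Window-1").start();
        new Thread(ticket, "Window-2").start();
        new Thread(ticket, "Window-3").start();
    }
}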
Deadlock problem
A deadlock occurs when multiple threads are blocked at the same time, each waiting for a resource held by another to be released. Because the threads are blocked indefinitely, the program cannot terminate normally.
Four necessary conditions for a deadlock in Java:
- Mutual exclusion: a resource can be used (held) by only one thread at a time; while it is held, other threads cannot use it.
- No preemption: a requester cannot forcibly take a resource away from its holder; a resource can only be released voluntarily by the thread that holds it.
- Hold and wait: a thread keeps the resources it already holds while requesting additional resources.
- Circular wait: there is a waiting cycle, e.g. P1 holds a resource P2 needs, P2 holds a resource P3 needs, and P3 holds a resource P1 needs. This forms a waiting loop (a minimal example is sketched below).
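A minimal deadlock sketch (the class and lock names are hypothetical): two threads acquire the same two locks in opposite order, so each ends up holding one lock while waiting forever for the other.

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleepQuietly(100);           // give the other thread time to grab lockB
                synchronized (lockB) {       // blocks forever: lockB is held by the other thread
                    System.out.println("thread-1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleepQuietly(100);
                synchronized (lockA) {       // blocks forever: lockA is held by the first thread
                    System.out.println("thread-2 got both locks");
                }
            }
        }).start();
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}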
Wait-and-wake-up mechanism (wait/notify)
The wait-and-wake-up mechanism relies on the following methods of Object (a minimal sketch follows the list):
- void wait() causes the current thread to wait until another thread calls the notify() method or notifyAll() method of the object.
- void notify() wakes up a single thread waiting for the object monitor (one at random).
- void notifyAll() wakes up all threads waiting for the object monitor.
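A minimal sketch of the mechanism (names are assumed for illustration): one thread waits on a shared lock object until another thread changes a flag and calls notify().

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                // Always wait in a loop, to guard against spurious wake-ups
                while (!ready) {
                    try {
                        lock.wait();            // releases the monitor while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up, ready = " + ready);
            }
        });
        waiter.start();

        Thread.sleep(500);                      // simulate some work
        synchronized (lock) {
            ready = true;
            lock.notify();                      // wake the waiting thread
        }
    }
}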
The difference between the wait() method and the sleep() method
- sleep is from the Thread class, and wait is from the Object class
- The sleep() method does not release the lock, while wait() releases the lock so that other threads can enter the synchronized block or method. A sleeping thread keeps the monitor and simply pauses; a waiting thread enters the object's wait set, gives up the CPU, and lets other threads run. wait() is usually called without a timeout, because the thread cannot make progress on its own: another thread must call notify()/notifyAll() to move it from the wait set back to the ready queue, where it waits for the OS to schedule it again. sleep(milliseconds) wakes up automatically when the given time elapses; before that, the sleep can only be cut short by calling interrupt(). Thread.sleep(0) is sometimes used to ask the operating system to re-run thread scheduling immediately. (See the sketch after this list.)
- Scope of use: wait(), notify() and notifyAll() can only be called inside a synchronized method or block, on the lock object, while sleep() can be used anywhere:
synchronized (x) {
    x.notify();   // or x.wait()
}
- Both sleep() and wait() throw the checked InterruptedException, so the exception must be caught or declared; notify() and notifyAll() do not throw checked exceptions.
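As a quick illustration of the lock-release difference (hypothetical class name): a thread that calls sleep() inside a synchronized block keeps the monitor, so another thread trying to enter a block on the same lock stays blocked until the sleep ends.

public class SleepKeepsLockDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            synchronized (lock) {
                try {
                    Thread.sleep(1000);   // sleep keeps the monitor for the full second
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        sleeper.start();

        Thread.sleep(100);                // let the sleeper grab the lock first
        synchronized (lock) {             // acquired only after the sleeper leaves its block
            System.out.println("got the lock after ~1s, because sleep() did not release it");
        }
    }
}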
Blocking queue
Under Collection there is the Queue interface. One of its sub-interfaces is the blocking queue, BlockingQueue, which has two commonly used implementation classes:
- ArrayBlockingQueue: a bounded queue backed by an array
- LinkedBlockingQueue: a queue backed by a linked list
Constructor:
ArrayBlockingQueue(int capacity) creates an ArrayBlockingQueue with a given (fixed) capacity and a default access policy.
Common methods:
void put(E e)
Inserts the specified element at the tail of this queue, waiting for space to become available if the queue is full.
E take()
Retrieves and removes the head of this queue, waiting if necessary until an element becomes available.
As you can see, both methods have a built-in waiting mechanism; if you want to know more, look at the source code.
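A minimal producer/consumer sketch with ArrayBlockingQueue (names and the capacity of 10 are chosen arbitrarily for illustration): put() blocks when the queue is full, take() blocks when it is empty.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        // Producer: blocks on put() once the queue holds 10 elements
        new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // Consumer: blocks on take() when the queue is empty
        new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}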
Data synchronization
volatile keyword
Suppose there is a piece of shared data, int a = 10, shared by two threads A and B. When A and B run at the same time, each copies the value of a into its own working cache; if A then sets a = 9, the copy of a seen by B may still be 10.
In this case you can declare the variable volatile (volatile int a = 10), which forces every read of a to fetch the latest value from main memory.
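A common illustration (a sketch, not from the original text): a stop flag polled by a worker thread. Because the flag is declared volatile, the write made in the main thread becomes visible to the worker.

public class VolatileFlagDemo {
    // Without volatile, the worker might keep reading a cached 'true' and loop forever
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker saw running = false and stopped");
        }).start();

        Thread.sleep(1000);
        running = false;   // the volatile write is visible to the worker
    }
}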
synchronized locking
Of course, the problem above can also be solved with synchronized locking, because entering a synchronized block likewise forces the thread to re-read the latest value of a from main memory each time it uses a.
Atomicity
Atomicity in multithreading: a group of one or more operations either completes entirely or does not execute at all; it can never be observed half-done.
For example, if two people (threads) eat from a shared pile of 100 steamed buns and the "take one bun" step is not atomic, unexpected problems may occur, such as both taking the same bun.
The volatile keyword only guarantees that the latest value is read; it cannot guarantee atomicity, because a thread can still be interrupted between reading the latest value and writing back its result.
synchronized does guarantee atomicity, because the whole critical section is executed by one thread at a time from start to finish.
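A sketch of the problem (hypothetical class name): count++ is a read-modify-write sequence, so even on a volatile field two threads can lose updates.

public class LostUpdateDemo {
    private static volatile int count = 0;   // volatile gives visibility, not atomicity

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;                     // read, add, write: three steps, not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 200000 because interleaved increments were lost
        System.out.println("count = " + count);
    }
}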
Tool classes in java.util.concurrent.atomic
The java.util.concurrent.atomic package provides a set of tool classes that guarantee atomic operations on basic data types (and references).
These tool classes ensure atomicity through the CAS algorithm combined with spinning, that is:
- Each time the data is read, its old value is saved;
- When modifying the data, the old value is compared with the current value in memory; if they are equal, the new value is written and the latest value is obtained;
- If the old value is not equal to the value in memory, the modification fails and the thread simply re-reads the latest value.
This compare-then-swap step is the CAS (compare-and-swap) algorithm; whenever the values are not equal, the thread spins (re-reads the latest value, compares again, and loops), as in the sketch below.
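A sketch using AtomicInteger (chosen here as an illustration): incrementAndGet() is built on exactly this CAS-plus-spin loop, and compareAndSet() exposes the raw CAS step.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    private static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count.incrementAndGet();      // atomic: CAS is retried until it succeeds
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + count.get());   // always 200000

        // The same spin pattern written by hand with compareAndSet():
        int old;
        do {
            old = count.get();                            // read the old value
        } while (!count.compareAndSet(old, old + 1));     // retry if another thread changed it
    }
}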
The difference between pessimistic lock (synchronized) and optimistic lock (CAS algorithm)
Pessimistic lock: while one thread uses a resource, other threads can only wait for the lock to be released; simpler but slower.
Optimistic lock: threads do not block each other while using a resource; conflicts are only detected at update time (via CAS) and retried; usually faster under low contention.
Concurrent tool classes
Hashtable
HashMap is not thread safe; in a multithreaded environment, Hashtable has traditionally been used instead.
Hashtable is thread safe but inefficient, because every operation locks the entire table.
ConcurrentHashMap
To improve on Hashtable's efficiency, ConcurrentHashMap was introduced. Before JDK 1.8, each element of the top-level ConcurrentHashMap array held another array (segment locking): an operation locked only the segment it touched before working on that inner array. Since JDK 1.8, each bucket of the array is operated on with synchronized + CAS, which is more efficient.
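A small usage sketch (the key name and counts are arbitrary): several threads updating a shared ConcurrentHashMap with merge(), which performs the per-bucket update atomically without external locking.

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                // merge() updates the bucket atomically, so no lost updates
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counts.get("hits"));   // 20000
    }
}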
CountDownLatch
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
Common methods:
void await()
Causes the current thread to wait until the latch count reaches zero unless the thread is interrupted.
void countDown()
Decrements the count of the latch; if the count reaches zero, all waiting threads are released.
long getCount()
Returns the current count.
Code example:
import java.util.concurrent.CountDownLatch;

public class MyThread2 extends Thread {
    private final CountDownLatch countDownLatch;

    public MyThread2(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName()
                + " is waiting for thread 1 to finish");
        try {
            countDownLatch.await();   // blocks until the count reaches zero
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName()
                + " thread 1 has finished, so I can continue");
    }
}
import java.util.concurrent.CountDownLatch;

public class MyThread1 extends Thread {
    private final CountDownLatch countDownLatch;

    public MyThread1(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        countDownLatch.countDown();   // decrement the latch; at zero, waiting threads wake up
        System.out.println(Thread.currentThread().getName() + " finished executing");
    }
}
public static void main(String[] args) {
    CountDownLatch countDownLatch = new CountDownLatch(1);
    Thread thread1 = new MyThread1(countDownLatch);
    Thread thread2 = new MyThread2(countDownLatch);
    thread1.start();
    thread2.start();
}
Semaphore
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, then takes it.
In short, a thread must obtain a permit before it can proceed.
Common methods:
void acquire()
Acquires a permit from this semaphore, blocking until one is available or the thread is interrupted.
void release()
Releases a permit, returning it to the semaphore.
import java.util.concurrent.Semaphore;

public class MyThread1 extends Thread {
    // The semaphore must be shared by all threads, so it is static here;
    // at most 2 threads can hold a permit at the same time
    private static final Semaphore semaphore = new Semaphore(2);

    @Override
    public void run() {
        try {
            // Acquire a permit (blocks if both permits are already taken)
            semaphore.acquire();
            // Execute the guarded code
            System.out.println(Thread.currentThread().getName() + " is running");
            // Return the permit
            semaphore.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
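For completeness, a minimal driver (an assumption, not part of the original): start several instances of the Semaphore-based MyThread1 above; since the semaphore holds only 2 permits, at most two threads can be between acquire() and release() at the same time.

public class SemaphoreDemo {
    public static void main(String[] args) {
        // Start five threads; only two can hold a permit concurrently
        for (int i = 0; i < 5; i++) {
            new MyThread1().start();
        }
    }
}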