Various locks
Optimistic lock: suited to read-heavy, write-light workloads. It always assumes the best case: every time it reads the data, it assumes no one else will modify it, so it takes no lock. Only when updating does it check whether another thread changed the data in the meantime, typically via a version-number mechanism or the CAS algorithm (contrasted with pessimistic locking in the sketch below).
Pessimistic lock: suited to write-heavy workloads. It always assumes the worst case: every time it touches the data, it assumes another thread will modify it, so it locks on every access. In Java, synchronized is a pessimistic lock. AQS first tries CAS to acquire the lock and falls back to pessimistic (blocking) behavior when the attempt fails.
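A minimal sketch contrasting the two strategies on a counter (the Counter class and method names are illustrative, not from any library): the optimistic path reads without locking and detects conflicts at write time via CAS, while the pessimistic path locks before touching the data.

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private final AtomicInteger value = new AtomicInteger(0);
    private int plain = 0;

    // Optimistic: read freely, then CAS; retry if another thread
    // changed the value in between. No lock is ever taken.
    public int optimisticIncrement() {
        int old;
        do {
            old = value.get();                        // assume no conflict
        } while (!value.compareAndSet(old, old + 1)); // detect conflict on write
        return old + 1;
    }

    // Pessimistic: take the lock before touching the data, so no
    // other thread can interfere while we hold it.
    public synchronized int pessimisticIncrement() {
        return ++plain;
    }
}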
Fair lock: multiple threads acquire the lock in the order in which they requested it. Threads join a queue directly, and the thread at the head of the queue is always the next to get the lock.
Unfair lock: when multiple threads try to acquire the lock, each first attempts to grab it directly, possibly jumping ahead of queued threads. If the attempt succeeds, the thread takes the lock immediately; if it fails, the thread joins the waiting queue.
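ReentrantLock exposes both policies through its constructor; a minimal sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // Fair: waiting threads acquire strictly in FIFO order.
        ReentrantLock fair = new ReentrantLock(true);
        // Unfair (the default): a fresh thread first tries a CAS to barge in,
        // and only queues up if that fails.
        ReentrantLock unfair = new ReentrantLock();

        fair.lock();
        try {
            System.out.println("acquired in queue order");
        } finally {
            fair.unlock();
        }
    }
}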
Exclusive lock: also known as an X lock, an exclusive lock can be held by only one thread at a time. When thread T places an exclusive lock on data D, no other thread can place any type of lock on D. The thread holding the exclusive lock may both read and write.
Shared lock: a shared lock can be held by multiple threads at once. If thread T places a shared lock on data D, other threads may also place shared locks on D, but not an exclusive lock. Threads holding the shared lock may only read the data, not write it.
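ReentrantReadWriteLock is the standard JDK pairing of the two: its read lock is shared and its write lock is exclusive. A minimal sketch (the class and field names are illustrative):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedExclusiveDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static int data = 0;

    static int read() {
        rw.readLock().lock();   // shared: many readers can hold this at once
        try {
            return data;
        } finally {
            rw.readLock().unlock();
        }
    }

    static void write(int v) {
        rw.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            data = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}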
Reentrant lock: a thread that already holds a lock may acquire it again (any number of times). The lock can be thought of as carrying a hold counter, initially 0, meaning no thread holds it. Each time a thread acquires the reentrant lock, the counter is incremented by 1; each release decrements it by 1. Only when the counter drops back to 0 can another thread acquire the lock.
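The hold counter is directly observable on ReentrantLock; a minimal sketch:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        lock.lock();                             // counter: 0 -> 1
        lock.lock();                             // same thread again: 1 -> 2
        System.out.println(lock.getHoldCount()); // prints 2
        lock.unlock();                           // 2 -> 1
        lock.unlock();                           // 1 -> 0, lock is now free
    }
}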
Spin lock: spinning is a lock optimization. If the thread holding the lock can release it within a very short time, the threads competing for it need not switch between user mode and kernel mode [1] and enter the blocked state; they simply busy-wait (spin), and the moment the holder releases the lock they can acquire it. This avoids the cost of switching between user mode and kernel mode. However, spinning occupies CPU; once contention becomes fierce, or the holder keeps the lock for a long time, spinning degrades performance.
[1] Thread scheduling runs in kernel mode, while the code in the thread runs in user mode
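A minimal, non-reentrant spin lock sketch built on CAS (SpinLock is an illustrative class, not a JDK one):

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait, burning CPU, until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // spin hint to the CPU; available since Java 9
        }
    }

    public void unlock() {
        // Only the owner can release: CAS the owner back to null.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}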
Blocking lock: the thread enters the blocked state to wait; when the corresponding signal arrives (a wake-up, or a timeout), it moves to the ready state, and ready threads compete to enter the running state. Blocking consumes no CPU while waiting, but the state transitions take longer than spinning.
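Blocking and waking can be seen directly with LockSupport, the parking primitive that java.util.concurrent locks are built on; a minimal sketch:

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker blocking...");
            LockSupport.park();     // blocked: consumes no CPU while waiting
            System.out.println("worker woken up");
        });
        worker.start();
        Thread.sleep(500);          // give the worker time to park
        LockSupport.unpark(worker); // the "signal" that makes it ready again
        worker.join();
    }
}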
Biased lock: biased toward the first thread that acquires the lock. If only one thread ever accesses the synchronized block and there is no multi-thread contention, the lock is biased to that thread. If another thread then contends for the lock, the bias is revoked and the lock is upgraded to a lightweight lock.
Lightweight lock: when a second thread joins the contention after a thread has entered the synchronized block, the biased lock is upgraded to a lightweight lock. The lightweight lock uses a CAS operation to try to update the object's Mark Word to a pointer to a Lock Record in the thread's stack. If the update succeeds, the current thread owns the lock; if it fails, the thread spins trying to acquire it, and if the spin count reaches its threshold without success, the lock is upgraded to a heavyweight lock.
Heavyweight lock: implemented via the monitor inside the object, which in turn relies on the operating system's mutex. This form of synchronization is expensive: system calls trigger switches between user mode and kernel mode, and blocked threads incur thread context switches.
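At the bytecode level, a synchronized block is what engages the object's monitor; a minimal sketch (class and method names are illustrative):

public class MonitorDemo {
    private final Object lock = new Object();

    public void critical() {
        synchronized (lock) { // compiles to monitorenter on the lock object
            // ... critical section ...
        }                     // compiles to monitorexit (plus an exception path)
    }
}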
CAS operation (compare and swap): compare and replace, often described as a lock-free optimization. It takes three operands: V (a memory address), A (the expected old value), and B (the new value). The value at V is changed to B only if the current value at V equals A; otherwise the operation fails. CAS is usually combined with spinning to retry until it succeeds, and it can suffer from the ABA problem [1]. CAS is backed by a CPU instruction, so it is an atomic operation that cannot be interrupted.
[1] ABA: thread 1 intends to use CAS to change a variable's value from A to B. Before it does, thread 2 changes the value from A to C and then from C back to A. When thread 1 finally executes its CAS, it sees the value is still A, so the CAS succeeds even though the value was modified in between.
At the lowest level, Java implements this by calling methods such as compareAndSwapInt on the Unsafe class.
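A minimal sketch of both points, using only standard JDK classes: a spin-retry CAS loop, and AtomicStampedReference, which pairs the value with a version stamp so an A -> C -> A sequence still advances the stamp and a stale CAS fails:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {
    public static void main(String[] args) {
        // CAS combined with spin: retry until no other thread raced us.
        AtomicInteger v = new AtomicInteger(0);
        int old;
        do {
            old = v.get();
        } while (!v.compareAndSet(old, old + 10));
        System.out.println(v.get()); // 10

        // Stamped reference: each successful update bumps the stamp, so a
        // CAS holding a stale stamp fails even if the value looks unchanged.
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();                        // stamp of the original A
        ref.compareAndSet(100, 101, stamp, stamp + 1);     // A -> C
        ref.compareAndSet(101, 100, stamp + 1, stamp + 2); // C -> back to A
        boolean ok = ref.compareAndSet(100, 102, stamp, stamp + 1);
        System.out.println(ok); // false: the stamp betrays the A -> C -> A history
    }
}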
Atomic classes
AtomicInteger, AtomicLong, AtomicBoolean, etc., under the java.util.concurrent.atomic package.
The AtomicInteger class guarantees atomic operations mainly through CAS, volatile, and the native methods of Unsafe, thereby avoiding the high overhead of synchronized.
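As a rough illustration (this is not the actual JDK source, which delegates to Unsafe/VarHandle intrinsics), incrementAndGet() behaves like a volatile read followed by a CAS, retried in a spin loop:

import java.util.concurrent.atomic.AtomicInteger;

public class IncrementSketch {
    // Sketch of the retry loop behind incrementAndGet(): the volatile read
    // always sees the latest value, and the CAS is atomic at the CPU level.
    static int incrementAndGetSketch(AtomicInteger ai) {
        int prev, next;
        do {
            prev = ai.get();
            next = prev + 1;
        } while (!ai.compareAndSet(prev, next));
        return next;
    }

    public static void main(String[] args) {
        AtomicInteger ai = new AtomicInteger(41);
        System.out.println(incrementAndGetSketch(ai)); // 42
    }
}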
Comparing several thread-safe increment approaches:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import cn.hutool.core.date.StopWatch;

public class IncrementBenchmark {
    static long count = 0;
    static AtomicLong count2 = new AtomicLong(0);
    static LongAdder count3 = new LongAdder();
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(1000);
        // hutool tool class
        StopWatch stopWatch = new StopWatch();

        // synchronized
        CountDownLatch latch = new CountDownLatch(1000);
        stopWatch.start("synchronized");
        for (int i = 1; i <= 1000; i++) {
            executorService.submit(() -> {
                for (int j = 1; j <= 10000; j++) {
                    synchronized (lock) {
                        count++;
                    }
                }
                latch.countDown();
            });
        }
        latch.await();
        stopWatch.stop();

        // atomicLong
        CountDownLatch latch2 = new CountDownLatch(1000);
        stopWatch.start("atomicLong");
        for (int i = 1; i <= 1000; i++) {
            executorService.submit(() -> {
                for (int j = 1; j <= 10000; j++) {
                    count2.incrementAndGet();
                }
                latch2.countDown();
            });
        }
        latch2.await();
        stopWatch.stop();

        // longAdder
        CountDownLatch latch3 = new CountDownLatch(1000);
        stopWatch.start("longAdder");
        for (int i = 1; i <= 1000; i++) {
            executorService.submit(() -> {
                for (int j = 1; j <= 10000; j++) {
                    count3.increment();
                }
                latch3.countDown();
            });
        }
        latch3.await();
        stopWatch.stop();

        System.out.println(count + " " + count2 + " " + count3);
        System.out.println(stopWatch.prettyPrint());
        executorService.shutdownNow();
    }
}
count=10000000 count2=10000000 count3=10000000
StopWatch '': running time = 918428800 ns
---------------------------------------------
ns         %     Task name
---------------------------------------------
677230400  074%  synchronized
106811300  012%  atomicLong
134387100  015%  longAdder
The results show that when the thread count and iteration count are large enough, LongAdder is the most efficient, followed by AtomicLong, with synchronized the slowest. With fewer threads or iterations, LongAdder and AtomicLong take roughly the same time, and synchronized remains the slowest. LongAdder wins under heavy contention because it spreads updates across multiple internal cells and sums them on read, instead of making every thread CAS on a single value.