Problems with locks in threads

Posted by pmjm1 on Thu, 09 Dec 2021 12:39:52 +0100

1, Context switching of threads

In a computer, the CPU executes tasks by allocating time slices. When a task's time slice runs out, the CPU saves the task's current state and switches to the next task; when it switches back later, it reloads the saved state and continues execution. The process of saving one task's state and loading another's is a context switch.

  • Premise of switching
    A CPU core can only run instructions from one thread at a time

  • Question 1: how does a thread remember its state after being switched back in?
    Each thread has its own program counter. The program counter records where the thread last executed, so the thread can resume from exactly that point.

  • Question 2: a thread can be switched out at any time. How do we ensure that important instruction sequences complete as a whole?
    This is, in essence, the thread safety problem

  • Question 3: performance degradation caused by CPU context switching

  1. In general, the performance cost of CPU context switching is small enough to ignore.
  2. Too many context switches (as in I/O-intensive systems) cause the CPU to spend more time saving and restoring data such as registers and virtual memory mappings, which reduces the time actually spent running processes and degrades overall system performance.

2, Thread safety (synchronization) issues

  • While the CPU switches between multiple threads, the threads may contend for shared resources, so some instruction sequences are not executed as a whole and the data becomes inconsistent

There are three conditions for thread safety problems:

  • There are multiple threads
  • They run at the same time
  • They execute the same code or modify the same shared variable
package com.hopu.Thread;

import java.util.Random;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Bank transfer example
 */
public class Bank {
    // Simulate 100 bank accounts
    private int[] accounts = new int[100];
//    Lock lock = new ReentrantLock();
    Object lock = new Object();

    // Initialize the accounts
    {
        for (int i = 0; i < accounts.length; i++) {
            accounts[i] = 10000;
        }
    }

    /**
     * Simulated transfer
     */
    public synchronized void transfer(int in, int out, int money) {
        if (accounts[out] < money) {
            throw new RuntimeException("Insufficient balance, please recharge in time");
        }
//        lock.lock();
        synchronized (lock) {
            accounts[out] -= money;
            System.out.println(out + " transferred out " + money);
            accounts[in] += money;
            System.out.println(in + " received " + money);
            System.out.println("The total amount in the bank is " + total());
        }
    }

    public int total() {
        int sum = 0;
        for (int i = 0; i < accounts.length; i++) {
            sum += accounts[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        Bank bank = new Bank();
        Random random = new Random();
        for (int i = 0; i < 50; i++) {
            new Thread(() -> {
                int out = random.nextInt(100);
                int in = random.nextInt(100);
                int money = random.nextInt(1000);
                bank.transfer(in, out, money);
            }).start();
        }
    }
}

Solution to thread safety problem

Solution: lock the code so that one thread executes the whole block of instructions, then release the lock so that other threads can execute it

Several locking methods

  1. Synchronized method
    Add the synchronized keyword to the method to lock the entire method
    /**
     * Simulated transfer
     */
    public synchronized void transfer(int in, int out, int money) {
        if (accounts[out] < money) {
            throw new RuntimeException("Insufficient balance, please recharge in time");
        }
        accounts[out] -= money;
        System.out.println(out + " transferred out " + money);
        accounts[in] += money;
        System.out.println(in + " received " + money);
        System.out.println("The total amount in the bank is " + total());
    }
  2. Synchronized code block
    The granularity is smaller than that of a synchronized method; the smaller the granularity, the more flexible the code and the better the performance
    Method: lock only a section of code
synchronized(Lock object){
	code
}
 //Synchronized code block
        synchronized (lock) {
            accounts[out] -= money;
            System.out.println(out + " transferred out " + money);
            accounts[in] += money;
            System.out.println(in + " received " + money);
            System.out.println("The total amount in the bank is " + total());
        }
Lock object:
	The lock object can be used to control the current thread, e.g. wait() to wait and notify() to notify;
	Any object can be used as a lock, but the lock object must not be a local variable;
	A non-static synchronized method locks on --> this
	A static synchronized method locks on ---> CurrentClass.class
	(a short sketch of both cases follows)
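As a minimal sketch (the class name Counter and its fields are made up for illustration), a non-static synchronized method locks on this, exactly as if its body were wrapped in synchronized (this), while a static synchronized method locks on the Class object:

public class Counter {
    private int instanceCount;
    private static int staticCount;

    // Non-static synchronized method: the lock object is "this"
    public synchronized void incrementInstance() {
        instanceCount++;                 // same as synchronized (this) { ... }
    }

    // Static synchronized method: the lock object is Counter.class
    public static synchronized void incrementStatic() {
        staticCount++;                   // same as synchronized (Counter.class) { ... }
    }
}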
  3. Synchronization lock (Lock)
How to use it:

1. Define a synchronization lock object (as a member variable)
2. Acquire the lock
3. Release the lock

//Member variable
Lock lock = new ReentrantLock();

//Method internal locking
lock.lock();
try{
	code...
}finally{
	//Release lock
	lock.unlock();
}
    Lock lock = new ReentrantLock();

    /**
     * Simulated transfer
     */
    public void transfer(int in, int out, int money) {
        if (accounts[out] < money) {
            throw new RuntimeException("Insufficient balance, please recharge in time");
        }
        lock.lock();
        try {
            accounts[out] -= money;
            System.out.println(out + " transferred out " + money);
            accounts[in] += money;
            System.out.println(in + " received " + money);
            System.out.println("The total amount in the bank is " + total());
        } finally {
            lock.unlock();
        }
    }

Lock interface

Basic methods:

  • lock() acquires the lock
  • unlock() releases the lock

Common implementation classes

  • ReentrantLock, a reentrant lock
  • ReentrantReadWriteLock.WriteLock, the write lock of a read-write lock
  • ReentrantReadWriteLock.ReadLock, the read lock of a read-write lock
  • ReadWriteLock, the separate read-write lock interface whose writeLock()/readLock() methods return the two locks above (a short sketch follows)
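A minimal read/write lock sketch, assuming a simple holder class (the names CachedValue, read and write are made up for illustration): readLock() can be held by many readers at once, while writeLock() is exclusive.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    // Many threads may hold the read lock at the same time
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it blocks both readers and other writers
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}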

Basic principle of synchronized:

Once a piece of code is synchronized, the JVM guards it with a monitor associated with the lock object.
When a thread reaches the synchronized section, the monitor first checks whether the lock object is already held by another thread. If it is, the current thread enters a waiting state and cannot execute;
if the lock is not held by another thread, the current thread acquires the lock and executes the code section.
Underlying bytecode instructions:

monitorenter
....
monitorexit
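As a small sketch (the class name MonitorDemo is made up for illustration), the synchronized block below is the kind of code that compiles down to that monitorenter/monitorexit pair; disassembling the compiled class with javap -c shows the two instructions around the block body:

public class MonitorDemo {
    private final Object lock = new Object();
    private int value;

    public void increment() {
        synchronized (lock) { // javap -c MonitorDemo: monitorenter is emitted at the start of the block
            value++;
        }                     // ...and monitorexit at the end (plus one more on the exception path)
    }
}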

Comparison of the three locking approaches

- Granularity

  Synchronized code block / synchronization lock < synchronized method

- Ease of programming

  Synchronized method > synchronized code block > synchronization lock

- Performance

  Synchronization lock > synchronized code block > synchronized method

- Functionality / flexibility

  Synchronization lock (it offers more methods, and Conditions can be added) > synchronized code block > synchronized method

  (a short sketch of that extra flexibility follows)
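A minimal sketch of that extra flexibility (the class and method names are made up for illustration): unlike synchronized, a ReentrantLock can give up after waiting a bounded time via tryLock, and can create Condition objects for fine-grained waiting.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FlexibleLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition(); // a Condition bound to this lock

    public boolean tryDoWork() throws InterruptedException {
        // Wait at most one second for the lock instead of blocking forever
        if (!lock.tryLock(1, TimeUnit.SECONDS)) {
            return false; // could not acquire the lock in time
        }
        try {
            // critical section; notEmpty.await() / notEmpty.signal() could be used here
            return true;
        } finally {
            lock.unlock();
        }
    }
}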

Pessimistic lock and optimistic lock

Pessimistic lock

It always assumes the worst case: every time a thread reads the data, it assumes someone else will modify it, so it locks the data on every access, and other threads block until they can acquire the lock (the shared resource is used by only one thread at a time; the others block and take over the resource only after the current thread is done). Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks and write locks, all acquired before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of pessimistic locking.

Optimistic lock

It always assumes the best case: every time a thread reads the data, it assumes nobody else will modify it, so it does not lock. Only when updating does it check whether anyone else modified the data in the meantime, typically using a version number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and can improve throughput; the write_condition-like mechanism provided by databases is essentially an optimistic lock. In Java, the atomic variable classes under the java.util.concurrent.atomic package are implemented with CAS, an implementation of optimistic locking. A small CAS-style sketch follows.
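A minimal sketch of a CAS-style optimistic update using AtomicInteger.compareAndSet (the class name OptimisticCounter is made up for illustration): read the current value, compute the new one without locking, and write it back only if nobody has changed it in between, retrying otherwise.

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger balance = new AtomicInteger(100);

    public void add(int delta) {
        while (true) {
            int current = balance.get();   // read the current value
            int updated = current + delta; // compute the new value without locking
            // Write back only if the value is still what we read; otherwise retry
            if (balance.compareAndSet(current, updated)) {
                return;
            }
        }
    }
}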

Usage scenarios of two locks

  • A pessimistic lock is heavier and uses more resources; it suits scenarios with frequent thread contention and more writes than reads
  • An optimistic lock is lighter and performs better; it suits scenarios with little thread contention and more reads than writes

An interesting little example

public class AtomicDemo {

    static int count = 0;

    public static void main(String[] args) {
        for (int i = 0; i < 100000; i++) {
            new Thread(() -> {
                count++;   // not atomic: read, add 1, write back
            }).start();
        }
        try {
            // Crude wait so that the threads have (mostly) finished before printing
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(count);
    }
}

After running the code above, you will find that the printed result is not 100000; it is always less than 100000. Where did the missing increments go?

We can decompose the count++ operation into three steps (a bytecode sketch follows this list):

  1. Read the value of count from memory
  2. Add 1 to the value
  3. Write the result back to count
    Since we do nothing special to protect this sequence, its atomicity can be broken. For example:
    Thread A reads count as 10, adds 1 to get 11, and is about to write it back; thread B also reads count as 10, adds 1 to get 11, and writes 11 into count;
    then thread A is switched back in and also writes 11, so one increment is lost.
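As a small sketch, those three steps are visible in the bytecode: for a static int field, count++ compiles to roughly getstatic, iconst_1, iadd, putstatic, and a context switch can happen between any two of them (the class below is assumed for illustration; the instructions are shown as comments, roughly as javap -c would print them):

public class CountPlusPlus {
    static int count = 0;

    static void increment() {
        count++;
        // javap -c CountPlusPlus shows roughly:
        //   getstatic   Field count:I   // step 1: read count from memory
        //   iconst_1
        //   iadd                        // step 2: add 1
        //   putstatic   Field count:I   // step 3: write the result back
    }
}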

The "special operation" mentioned above is our solution:

1. Pessimistic locking: use a synchronized method, a synchronized block, or a synchronization lock
2. Optimistic locking:
   use the atomic integer classes

Atomic class

AtomicInteger class

An atomic integer; the underlying implementation uses the CAS algorithm to perform atomic increment and decrement operations on an int value

Common methods:
- incrementAndGet() atomically increments by 1 and returns the new value
- decrementAndGet() atomically decrements by 1 and returns the new value
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {

    static int count = 0;

    static AtomicInteger integer = new AtomicInteger(0);

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            new Thread(() -> {
                count++;                    // plain increment: may lose updates
                integer.incrementAndGet();  // atomic increment
            }).start();
        }
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("count:" + count);
        System.out.println("atomic:" + integer.get());
    }
}

Problems of CAS algorithm

1. The ABA problem:
* What CAS is: for a value V in memory, provide an expected old value A and a new value B. If the current value of V equals A, write B into V. The whole operation is atomic.
CAS either succeeds or fails; on failure the caller usually keeps retrying, or simply gives up.
* The ABA problem: if another thread changes the value of V, which was originally A, first to B and then back to A, the current thread's CAS operation cannot tell that V has changed at all.
For instance:
	When you are very thirsty, you find a cup full of water and drink it all, then refill the cup with water and leave.
	When the real owner of the cup comes back, he sees that the cup is still full; of course, he cannot tell that it has been drunk and refilled.
(A sketch of the JDK's stamped-reference fix for ABA follows.)

2. Spin overhead: if the expected value keeps differing from the actual value, the CAS loop spins and waits, which can consume a lot of CPU.
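A minimal sketch of how the JDK addresses ABA, using java.util.concurrent.atomic.AtomicStampedReference: every update carries a version stamp, so a value that went A -> B -> A is still detected because the stamp has changed (the field and method names are made up for illustration).

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Value 100 with initial stamp (version) 0
    private final AtomicStampedReference<Integer> ref =
            new AtomicStampedReference<>(100, 0);

    public boolean update(int expected, int newValue) {
        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);  // read the value and its stamp together
        int stamp = stampHolder[0];
        // The CAS succeeds only if both the value and the stamp are unchanged
        return current == expected
                && ref.compareAndSet(current, newValue, stamp, stamp + 1);
    }
}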

Exercise

Write a lazy singleton, create 100 threads, and have each thread obtain the singleton object to see whether there is a problem (print the hashCode of the object to check whether it is the same instance). One possible solution is sketched below.
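One possible solution sketch (not the only correct way): a lazy singleton using double-checked locking with a volatile field, plus a driver that starts 100 threads and prints each instance's hashCode; if the singleton is thread safe, every printed value is the same.

public class LazySingleton {
    // volatile prevents a thread from seeing a half-constructed instance
    private static volatile LazySingleton instance;

    private LazySingleton() {
    }

    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, without locking
            synchronized (LazySingleton.class) {
                if (instance == null) {              // second check, while holding the lock
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            new Thread(() ->
                    System.out.println(LazySingleton.getInstance().hashCode())
            ).start();
        }
    }
}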

Topics: Java Back-end