Problems of concurrent programming

Posted by aa-true-vicious on Sat, 22 Jan 2022 17:38:25 +0100

The purpose of concurrent programming is to make programs run faster, but that does not mean simply starting more threads to maximize parallelism. A program that uses multithreading to run faster faces many problems, such as context switching, deadlock, and resource limits imposed by hardware and software. This article mainly discusses context switching and deadlock.

Context switching problem:

On a single-core processor, to support multithreaded execution the CPU allocates a time slice to each thread. A time slice is the amount of CPU time given to a thread, usually tens of milliseconds (ms). Because time slices are so short, the CPU creates the impression that multiple threads execute simultaneously by constantly switching between them.

The CPU cycles through tasks using a time-slice allocation algorithm. After the current task has run for one time slice, the CPU switches to the next task. Before switching, it saves the state of the current task so that the state can be restored the next time the task is scheduled. The process from saving a task's state to reloading it is what we call a context switch.

Here is a question worth considering: is multithreading always faster? Of course not. Below a certain number of accumulated operations, concurrent execution is actually slower than serial execution, because of the overhead of thread creation and context switching. Let's analyze this through a test case:

package com.mybatis.test;

import org.apache.log4j.Logger;
/**
 * @ClassName ConcurrencyAndSerialTest
 * @Description Concurrent and serial operation test
 * @Author chengjunyu
 * @Date 2022/1/22 10:33
 * @Version 1.0
 */
public class ConcurrencyAndSerialTest {

    private static final long count = 1000000000L;

    public static void main(String[] args) throws InterruptedException {
        concurrency();
        serial();
    }

    private static void concurrency() throws InterruptedException {
        long startTime = System.currentTimeMillis();
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                int a = 0;
                // Do the same amount of work as the serial version,
                // so the comparison is fair
                for (long i = 0; i < count; i++) {
                    a += 2;
                }
            }
        });
        thread.start();
        int b = 0;
        for (long i = 0; i < count; i++) {
            b--;
        }
        thread.join();
        long time = System.currentTimeMillis() - startTime;
        System.out.println("Concurrent execution of " + count + " operations took " + time + "ms");
    }

    private static void serial() {
        long startTime = System.currentTimeMillis();
        int a = 0;
        for (long i = 0; i < count; i++) {
            a += 2;
        }
        int b = 0;
        for (long i = 0; i < count; i++) {
            b--;
        }
        long time = System.currentTimeMillis() - startTime;
        System.out.println("Serial execution of " + count + " operations took " + time + "ms");
    }
}

In the code, I set count to 10,000; 100,000; 1,000,000; 10,000,000; 100,000,000; and 1,000,000,000 in turn, and obtained the following results:

| Number of operations | Serial time | Concurrent time | Comparison                             |
|----------------------|-------------|-----------------|----------------------------------------|
| 10,000               | 0 ms        | 1 ms            | Serial is faster than concurrent       |
| 100,000              | 2 ms        | 3 ms            | Serial is faster than concurrent       |
| 1 million            | 4 ms        | 4 ms            | Serial and concurrent are the same     |
| 10 million           | 10 ms       | 7 ms            | Serial is slower than concurrent       |
| 100 million          | 68 ms       | 34 ms           | Serial is twice as slow as concurrent  |
| 1 billion            | 351 ms      | 665 ms          | Serial is faster than concurrent       |

Having compared concurrent and serial execution across different operation counts, let's now think about how to reduce context switching.

Common ways to reduce context switching include the CAS algorithm, lock-free concurrent programming, using as few threads as possible, and using coroutines.

CAS algorithm: Compare And Swap. Java's Atomic package uses CAS to update data without locking;
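As an illustration (a minimal sketch of my own, not from the original article), java.util.concurrent.atomic.AtomicInteger updates a counter with CAS instead of a lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // incrementAndGet() internally retries a CAS loop until it succeeds
        counter.incrementAndGet();

        // An explicit CAS: succeeds only if the current value is still 1
        boolean swapped = counter.compareAndSet(1, 10);

        System.out.println(swapped + " " + counter.get()); // prints: true 10
    }
}
```

Because the update either succeeds atomically or is retried, no thread ever blocks on a lock, so no lock-induced context switch occurs.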

Lock-free concurrent programming: when multiple threads compete for a lock, context switches occur. When processing data with multiple threads, locks can sometimes be avoided entirely, for example by segmenting the data by a hash of its ID so that different threads process different segments;
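A minimal sketch of this segmentation idea (the class name and the modulo partitioning are my own illustration): each worker thread writes only to its own slot, so no two threads share mutable state and no lock is needed:

```java
import java.util.ArrayList;
import java.util.List;

public class SegmentDemo {
    public static void main(String[] args) throws InterruptedException {
        int nThreads = 4;
        long[] partialSums = new long[nThreads];   // one slot per thread, never shared
        List<Thread> workers = new ArrayList<>();

        for (int t = 0; t < nThreads; t++) {
            final int segment = t;
            Thread worker = new Thread(() -> {
                // This thread handles only IDs where id % nThreads == segment,
                // so no two threads ever touch the same slot: no lock required.
                for (int id = 0; id < 1000; id++) {
                    if (id % nThreads == segment) {
                        partialSums[segment] += id;
                    }
                }
            });
            workers.add(worker);
            worker.start();
        }
        for (Thread w : workers) {
            w.join();
        }

        long total = 0;
        for (long s : partialSums) {
            total += s;
        }
        System.out.println(total); // sum of 0..999 = 499500
    }
}
```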

Use as few threads as possible: avoid creating unnecessary threads. This one is simple to understand: if a large number of threads are created while there are few tasks, most of those threads will sit idle in a waiting state;

Coroutines: implement multi-task scheduling within a single thread, maintaining the switching between multiple tasks inside that one thread.

Deadlock:

Deadlock is a blocking condition in which two or more processes, while competing for resources or waiting on each other to communicate, can make no further progress without outside intervention. The system is then said to be in a deadlock state, and the processes that wait on each other forever are called deadlocked processes.

Deadlock generation conditions:

1. Mutual exclusion

A process uses its allocated resources exclusively; that is, a resource is occupied by only one process at a time. Any other process requesting the resource must wait until the occupying process releases it.

2. Hold and wait (request and hold)

A process already holds at least one resource but requests a new one that is occupied by another process. The requesting process blocks, yet does not release the resources it already holds.

3. No preemption

Resources a process has acquired cannot be taken away before it is finished with them; they can only be released voluntarily by the process itself.

4. Circular wait

When a deadlock occurs, there must be a cycle of two or more processes in the system in which each process waits for a resource held by the next.

After understanding the conditions for deadlock, we can consider how to prevent and avoid it:

Deadlock prevention:

Break any one of the four deadlock conditions. Normally the mutual exclusion condition cannot be broken, so we work on the remaining three.

Breaking the hold-and-wait condition:

Method:

Use static pre-allocation: a process requests all of the resources it needs before running and is not started until every request is satisfied. Once running, those resources belong to it for its whole lifetime and it makes no further requests, so the system cannot deadlock.

Disadvantages:

System resources are seriously wasted: some may be used only at the very beginning or end of a run, or not at all. It can also lead to starvation: when individual resources are held by other processes for a long time, the processes waiting for them can never start running.

Breaking the no-preemption condition

Method:

When a process that holds non-preemptible resources has a request for new resources denied, it must release all the resources it currently holds and re-request them later if needed. In effect, resources a process already occupies can be temporarily released, that is, preempted, which breaks the no-preemption condition.

Disadvantages:

This strategy is complex to implement. Releasing resources already obtained may undo the work of an earlier stage, and repeatedly requesting and releasing resources increases system overhead and reduces throughput. It is therefore usually applied to resources whose state is easy to save and restore, such as CPU registers and memory, and generally cannot be used for resources such as printers.

Breaking the circular-wait condition

Method:

Use ordered resource allocation: number all resources in the system and require every process to request resources in increasing order of number, acquiring resources of the same type in a single request. That is, once a process has been allocated resource number a, any later request may only be for resources numbered greater than a.

Disadvantages:

The numbering must remain relatively stable, which limits the addition of new types of devices. Although the order in which most jobs actually use resources is considered when numbering them, jobs often use resources in an order different from the one the system prescribes, wasting resources. In addition, requesting resources in a fixed order inevitably complicates application programming.
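The ordered-allocation idea maps directly to lock ordering in Java. A minimal sketch (the class and lock names are my own): both threads acquire lockA before lockB, so a circular wait can never form:

```java
public class LockOrderingDemo {
    private static final Object lockA = new Object(); // lower "number"
    private static final Object lockB = new Object(); // higher "number"

    public static void main(String[] args) throws InterruptedException {
        // Both threads take the locks in the same global order: lockA, then lockB.
        // Neither can hold lockB while waiting for lockA, so no cycle is possible.
        Runnable task = () -> {
            synchronized (lockA) {
                synchronized (lockB) {
                    System.out.println(Thread.currentThread().getName() + " done");
                }
            }
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

Compare this with the deadlock example at the end of the article, where the two threads take the same two locks in opposite orders.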

From the above analysis we can see that breaking the deadlock conditions usually comes with extra system overhead and lost performance. In practice, therefore, we more often consider how to avoid deadlock rather than how to prevent it.

Common methods to avoid deadlock:

1. Avoid having one thread acquire multiple locks at the same time;

2. Avoid having one thread occupy multiple resources inside a lock; try to make each lock protect only one resource;

3. Prefer a timed lock, lock.tryLock(timeout), over the intrinsic lock mechanism;

4. For database locks, locking and unlocking must happen on the same database connection, otherwise the unlock will fail.
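A minimal sketch of point 3, using java.util.concurrent.locks.ReentrantLock (the class and lock names are my own): if the second lock cannot be acquired within the timeout, the thread backs off and releases the first lock instead of waiting forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Try to take both locks; back off (releasing lockA) if lockB is unavailable.
    static boolean acquireBoth() throws InterruptedException {
        if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                    return true; // caller must unlock both when done
                }
            } finally {
                if (!lockB.isHeldByCurrentThread()) {
                    lockA.unlock(); // back off instead of holding lockA forever
                }
            }
        }
        return false; // could not get both locks; caller may retry later
    }

    public static void main(String[] args) throws InterruptedException {
        if (acquireBoth()) {
            try {
                System.out.println("both locks held");
            } finally {
                lockB.unlock();
                lockA.unlock();
            }
        }
    }
}
```

Because a thread never blocks indefinitely while holding a lock, this pattern also breaks the no-preemption condition discussed above.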

Finally, let's look at a classic deadlock code example:

package com.mybatis.test;

/**
 * @ClassName DeadLockTest
 * @Description Deadlock test
 * @Author chengjunyu
 * @Date 2022/1/22 11:58
 * @Version 1.0
 */
public class DeadLockTest {

    private static String sourceA = "SourceA";
    private static String sourceB = "SourceB";

    public static void main(String[] args) {
        new DeadLockTest().deadLock();
    }

    private void deadLock() {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (sourceA) {
                    try {
                        // Sleep so that thread1 has time to acquire sourceB,
                        // making the deadlock reliable
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    synchronized (sourceB) {
                        System.out.println(sourceB);
                    }
                }
            }
        });

        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (sourceB) {
                    synchronized (sourceA) {
                        System.out.println(sourceA);
                    }
                }
            }
        });
        thread.start();
        thread1.start();
    }
}

Topics: Concurrent Programming