iOS processes, threads, and locks

Posted by bapan on Fri, 07 Jan 2022 03:22:03 +0100

Relationship between process and thread

A process is an application running in the system. Processes are independent of one another, each running in its own dedicated and protected memory space. For a process to perform any work it must have at least one thread (the main thread); all of a process's tasks are executed on threads.
A single CPU core can execute only one thread at a time, so only one thread is ever actually working. "Concurrent" execution of multiple threads really means the CPU switches (schedules) rapidly between them; if the switching is fast enough, it creates the illusion that the threads run simultaneously.

NSThread

NSThread is Objective-C's basic thread class. You can control thread execution with methods such as start, sleep, and cancel.

// Method 1: create a thread; you must start it yourself (most controllable)
    NSThread *thread = [[NSThread alloc] initWithTarget:self selector:@selector(run) object:nil];
    // Start the thread
    [thread start];
    
    // Method 2: the thread starts automatically after creation
    [NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];
    // Method 3: implicitly create and start a thread
    [self performSelectorInBackground:@selector(run) withObject:nil];

GCD

GCD automatically makes use of the available CPU cores.
GCD automatically manages the life cycle of its threads (creating threads, scheduling tasks, destroying threads, and so on).

GCD has two core concepts: tasks and queues.
Tasks
Synchronous: the task runs on the current thread; a synchronous dispatch cannot create a new thread. The call blocks the current thread until the block has finished executing, and only then does the current thread continue.
Asynchronous: the call returns immediately and does not block the current thread. It is able to run the task on a new thread, although it does not necessarily create one.
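A minimal sketch of the difference (the queue label here is illustrative):

```objectivec
dispatch_queue_t q = dispatch_queue_create("demo.serial", DISPATCH_QUEUE_SERIAL);

// Synchronous: blocks the current thread until the block has finished;
// no new thread is created.
dispatch_sync(q, ^{
    NSLog(@"sync task on %@", [NSThread currentThread]);
});
NSLog(@"printed only after the sync task has finished");

// Asynchronous: returns immediately; the block runs later,
// possibly on another thread.
dispatch_async(q, ^{
    NSLog(@"async task on %@", [NSThread currentThread]);
});
NSLog(@"may print before the async task runs");
```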

Queues
A queue determines how the tasks added to it are executed. First check whether tasks go into a serial or a concurrent queue: a serial queue executes its tasks strictly one after another, in order, while a concurrent queue combined with asynchronous dispatch lets tasks run interleaved.
//Global concurrent queue
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
//Serial queue
dispatch_queue_t serialQueue = dispatch_queue_create("test.serial", DISPATCH_QUEUE_SERIAL);
//Concurrent queue
dispatch_queue_t concurrentQueue = dispatch_queue_create("test.concurrent", DISPATCH_QUEUE_CONCURRENT);

The barrier function controls execution order on a concurrent queue: tasks submitted before the barrier run first, then the barrier block runs by itself, and only after the barrier finishes do tasks submitted after it begin. (The barrier only has this effect on a custom concurrent queue; on a global queue it behaves like an ordinary dispatch.)
dispatch_barrier_async(queue, ^{ });
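A sketch of the ordering this produces on a custom concurrent queue (names are illustrative):

```objectivec
dispatch_queue_t q = dispatch_queue_create("demo.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(q, ^{ NSLog(@"task 1"); });
dispatch_async(q, ^{ NSLog(@"task 2"); });
// The barrier starts only after tasks 1 and 2 have finished...
dispatch_barrier_async(q, ^{ NSLog(@"barrier"); });
// ...and task 3 starts only after the barrier has finished.
dispatch_async(q, ^{ NSLog(@"task 3"); });
```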

Queue groups
When there are multiple asynchronous tasks, dispatch_group_notify runs its block only after every task in the group has finished, which makes it possible to monitor and sequence a business flow.

 
    // Create queue group
    dispatch_group_t group = dispatch_group_create();
    // Get a global concurrent queue
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    // Perform queue group tasks
    dispatch_group_async(group, queue, ^{
        NSLog(@"Asynchronous task 1");
    });
    // Perform queue group tasks
    dispatch_group_async(group, queue, ^{
        NSLog(@"Asynchronous task 2");
    });
    //After the tasks in the queue group are completed, execute this function
    dispatch_group_notify(group, queue, ^{
        NSLog(@"After completing the task in the group, execute here");
    });

Semaphore

// Create a semaphore with an initial count of 0
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    
    dispatch_async(queue, ^{
        // Signalling increments the semaphore count by 1
        dispatch_semaphore_signal(semaphore);
    });
    // Blocks the current thread:
    // if the count is 0 the wait blocks; otherwise it decrements the count and continues
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"Runs only after the semaphore has been signalled");

// Use a semaphore to limit concurrency (at most 5 tasks at a time)
    dispatch_semaphore_t sema = dispatch_semaphore_create(5);
    for (int i = 0; i < 100; i++) {
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // operation
            dispatch_semaphore_signal(sema);
        });
    }

dispatch_group_enter and dispatch_group_leave
dispatch_group_enter and dispatch_group_leave give manual control over group membership; every enter must be balanced by exactly one leave. They are typically used to synchronize dependent asynchronous work, for example AFNetworking requests: dispatch_group_notify fires only once every enter has been matched by a leave, so the follow-up work runs after all the requests complete without having to be nested inside each request's callback.

dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_group_enter(group);
    // Pseudocode: an AFNetworking request; call leave in its completion callback
    AFNetworking request, in its completion callback: {
        NSLog(@"Asynchronous task 1");
        dispatch_group_leave(group);
    }
    dispatch_group_enter(group);
    AFNetworking request, in its completion callback: {
        NSLog(@"Asynchronous task 2");
        dispatch_group_leave(group);
    }
    dispatch_group_notify(group, queue, ^{
        NSLog(@"Runs after all tasks in the group have completed");
    });

NSOperationQueue

NSOperation is Apple's object-oriented encapsulation of GCD. It is simpler and more convenient to use than raw GCD, and adds features such as dependencies, priorities, subclassing, and key-value observing. NSOperation and NSOperationQueue correspond to GCD's tasks and queues respectively.
Steps for implementing multithreading with NSOperation and NSOperationQueue:
1. Encapsulate the work to be performed in an NSOperation object.
2. Add the NSOperation object to an NSOperationQueue.
The system automatically takes operations out of the NSOperationQueue and executes the work they encapsulate on other threads.
Once an NSOperation has been added to an NSOperationQueue, the system performs its work asynchronously.
3. Maximum concurrency: maxConcurrentOperationCount defaults to -1 (no limit). Setting it to 1 makes the queue execute operations serially, one at a time (it does not mean no new thread is created); values greater than 1 allow concurrent execution.
4. An NSOperationQueue can be suspended or cancelled. Suspending or cancelling does not interrupt the operation that is currently executing; it takes effect once that operation finishes.
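A small sketch of points 3 and 4 (the operation contents are illustrative):

```objectivec
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;    // operations now execute serially
[queue addOperationWithBlock:^{ NSLog(@"operation 1"); }];
[queue addOperationWithBlock:^{ NSLog(@"operation 2"); }];

queue.suspended = YES;                    // pauses operations that have not started yet
queue.suspended = NO;                     // resumes them
[queue cancelAllOperations];              // cancels queued operations; an operation
                                          // already running finishes first
```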

Operation dependency

// Create a non-main queue
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    // Download the first picture
    NSBlockOperation *download1 = [NSBlockOperation blockOperationWithBlock:^{
    }];
    // Download the second picture
    NSBlockOperation *download2 = [NSBlockOperation blockOperationWithBlock:^{
    }];
    // Composition operation
    NSBlockOperation *combine = [NSBlockOperation blockOperationWithBlock:^{
        // Return to the main thread to refresh the UI
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // Composite the pictures
        }];
    }];
    // Add dependencies: wait for pictures 1 and 2 to finish downloading before compositing
    [combine addDependency:download1];
    [combine addDependency:download2];
    // Add the operations to the queue
    [queue addOperation:download1];
    [queue addOperation:download2];
    [queue addOperation:combine];

Read write lock

self.concurrentQueue = dispatch_queue_create("aaa", DISPATCH_QUEUE_CONCURRENT);
// The barrier block cannot run concurrently with other blocks on the queue,
// so writes are serialized (one thread at a time), as the log shows
- (void)setText:(NSString *)text {
    
    __weak typeof(self) weakSelf = self;
    dispatch_barrier_sync(self.concurrentQueue, ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        strongSelf->_text = text;
        NSLog(@"Write operation %@ %@", text, [NSThread currentThread]);
        // Simulate a time-consuming operation; writes proceed one by one, never concurrently
        sleep(1);
    });
}
// Reads can run concurrently, so all the read logs appear within a short interval
- (NSString *)text {
    
    __block NSString *t = nil;
    __weak typeof(self) weakSelf = self;
    dispatch_sync(self.concurrentQueue, ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        t = strongSelf->_text;
        // Simulate a time-consuming operation; the reads finish almost simultaneously,
        // showing that multiple threads entered concurrently
        sleep(1);
    });
    return t;
}

Mutex locks: simulating ticket sales, where multiple threads access and modify a shared variable at the same time

- (void)saleTicket
{
    while (1) {
        // Any object can serve as the lock token, so self works here
        // While one thread holds the lock, no other thread can enter this block
        @synchronized (self) {
            // Simulate the time it takes to sell a ticket
            [NSThread sleepForTimeInterval:0.05];
            if (self.numTicket > 0) {
                self.numTicket -= 1;
                NSLog(@"%@ sold a ticket, %zd tickets left", [NSThread currentThread].name, self.numTicket);
            } else {
                NSLog(@"The tickets have been sold out");
                break;
            }
        }
    }
}

// The same protection can be implemented with an NSLock, provided the lock
// object is shared by all threads (e.g. held in a property), not created per call:
// [self.lock lock];
// ... critical section ...
// [self.lock unlock];

If the numTicket property is declared atomic, the compiler-generated getter and setter are protected by a mutex. Note that this only makes each individual access atomic; a compound operation such as self.numTicket -= 1 (a read followed by a write) is still not thread-safe by itself.
@property (atomic, assign) NSInteger numTicket;

Memory allocation for a process

[Figure: memory layout of a process — code, constant, static, heap, and stack areas]

Each area stores its corresponding content. The code, constant, and static areas are loaded automatically by the system and released when the process exits; developers do not need to manage them.

The stack generally holds local and temporary variables, allocated and released automatically by the compiler; each running thread has its own stack. The heap is used for dynamic memory allocation, which the programmer allocates and releases. Stack allocation is generally faster because the system manages it automatically, but it is less flexible than the heap.

In Swift, value types are generally stored on the stack and reference types on the heap. Typical value types are struct, enum, and tuple; Int, Double, Array, Dictionary, and so on are implemented as structs and are therefore value types. class and closure are reference types, so whenever classes or closures appear in Swift code, keep their reference semantics in mind.

Topics: iOS xcode objective-c