Summary of common functions for multi-process and multi-thread programming in Linux

Posted by jpschwartz on Thu, 24 Feb 2022 09:08:40 +0100

Multi-process

Creating a process

To run a program in multi-process mode, you first need to create new processes, which is done with fork:

#include <unistd.h>
//Create a child process; returns the child's process number in the parent and 0 in the child
pid_t fork(void);

//Return the process number (PID) of the calling process
pid_t getpid(void);

Notes:

  • The child and parent processes share the code segment, but the data segment is copied
  • fork returns the child's process number in the parent and returns 0 in the child
  • If the child process exits before the parent, it becomes a zombie process; a signal handler should be set up to avoid this
  • If the parent process exits before the child, the child becomes an orphan process, is adopted by the system and continues running in the background
  • Data in the child and parent are independent of each other; even though both may hold the same pointer values, writes in one process cannot affect the other
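
A minimal sketch illustrating the points above (the printed messages are just for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0)
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());  // fork returned 0, so this is the child
    else
        printf("parent: pid=%d, child=%d\n", getpid(), pid);         // fork returned the child's PID
    return 0;
}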

Ways to avoid generating zombie processes:

#include <signal.h>
//Ignore the child's exit signal so the kernel reaps it, avoiding zombie processes
signal(SIGCHLD, SIG_IGN);

The child created by fork is a complete copy of the parent. When that is not the effect we want, the exec family of functions can be used to replace the process image:

// Each comment states: argument form (list or array), whether a full path is required, and which environment is used
int execl(const char *path, const char *arg, ...);                       // list, full path required, inherits the current environment
int execlp(const char *file, const char *arg, ...);                      // list, searches PATH, inherits the current environment
int execle(const char *path, const char *arg, ..., char *const envp[]);  // list, full path required, caller supplies its own environment
int execv(const char *path, char *const argv[]);                         // array, full path required, inherits the current environment
int execvp(const char *file, char *const argv[]);                        // array, searches PATH, inherits the current environment
int execve(const char *path, char *const argv[], char *const envp[]);    // array, full path required, caller supplies its own environment
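
A small sketch of the usual fork-then-exec pattern; the program being executed ("/bin/ls") is only an example:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (fork() == 0) {
        // In the child: replace the process image with /bin/ls.
        // execl: argument list, full path, inherits the current environment; the list ends with NULL.
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");   // only reached if exec fails
        _exit(1);
    }
    return 0;              // the parent simply continues
}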

As mentioned above, the child and parent processes are independent of each other and cannot affect each other through shared pointers, so communication between processes requires dedicated mechanisms.

Interprocess communication

The main forms of inter-process communication are pipes, semaphores, message queues, signals, shared memory, memory mapping and sockets.

Pipes

Pipes are divided into named pipes and unnamed pipes. For a named pipe (FIFO), I/O operations are basically the same as for an ordinary pipe, but there is one main difference: a named pipe can be created in advance, for example by running on the command line
mkfifo myfifo
To use a named pipe we must open it explicitly with the open function, whereas an unnamed pipe is created inside the parent process, and its file descriptors are either copied during fork or passed as parameters when a new process image is started with exec.
Generally speaking, a FIFO blocks just like a pipe. That is, if a FIFO is opened for reading, the reading process blocks until some other process opens the FIFO and writes data into it; the same is true in the other direction. If you do not want named pipe operations to block, you can pass the O_NONBLOCK flag when opening to turn off the default blocking behaviour.
Unnamed Pipes

    #include <unistd.h> 
    // Returns 0 on success and -1 on failure.
    int pipe(int pipe_interface[2]);// Create a pipe; the function fills the array with two new file descriptors
    // For example
    int fd[2];
    int result = pipe(fd);

Data is accessed with the low-level read and write calls: write data to pipe_interface[1] and read data from pipe_interface[0]. Writing and reading follow a first-in, first-out order.
Pipe read/write rules
When no data is readable:
O_NONBLOCK disabled: the read call blocks, i.e. the process suspends execution until data arrives.
O_NONBLOCK enabled: the read call returns -1 with errno set to EAGAIN.
When the pipe is full:
O_NONBLOCK disabled: the write call blocks until some process reads data.
O_NONBLOCK enabled: the call returns -1 with errno set to EAGAIN.
If the file descriptors for all write ends of the pipe are closed, read returns 0.
If the file descriptors for all read ends of the pipe are closed, a write operation raises the signal SIGPIPE.
When the amount of data to be written is not greater than PIPE_BUF (POSIX.1 requires PIPE_BUF to be at least 512 bytes), Linux guarantees that the write is atomic.
When the amount of data to be written is greater than PIPE_BUF, Linux no longer guarantees atomicity.
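
A minimal sketch of parent-to-child communication through an unnamed pipe created before fork (the message text and buffer size are illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                                   // fd[0]: read end, fd[1]: write end
    if (fork() == 0) {                          // child: reads from the pipe
        close(fd[1]);                           // close the unused write end
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
    } else {                                    // parent: writes to the pipe
        close(fd[0]);                           // close the unused read end
        write(fd[1], "hello pipe", strlen("hello pipe"));
        close(fd[1]);
    }
    return 0;
}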

Named pipes

#include <sys/types.h> 
#include <sys/stat.h>
// Returns 0 on success, otherwise -1; the reason is stored in errno. 
int mkfifo(const char *fileName, mode_t mode); // Create a named pipe called fileName; mode is the file's permission bits (mode & ~umask)
// for example
mkfifo("/tmp/cmd_pipe", S_IFIFO | 0666);

A named pipe is a special file that exists in the filesystem, so it allows two unrelated processes to communicate.
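
A rough sketch of two unrelated processes talking through the FIFO above; the reader is shown as a program and the writer as comments, and both reuse the /tmp/cmd_pipe path from the earlier example:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

// Writer side (run in one process):
//   mkfifo("/tmp/cmd_pipe", 0666);               // create the FIFO if it does not exist
//   int fd = open("/tmp/cmd_pipe", O_WRONLY);    // blocks until a reader opens the FIFO
//   write(fd, "hello fifo", 10);
//   close(fd);

// Reader side (run in another process):
int main(void)
{
    char buf[64];
    int fd = open("/tmp/cmd_pipe", O_RDONLY);     // blocks until a writer opens the FIFO
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("read from fifo: %s\n", buf); }
    close(fd);
    return 0;
}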

Semaphore

A semaphore is a counter that is often used as a lock. Its value is usually an integer, changed with wait and signal operations, also called P and V operations:
P: if the semaphore's value is greater than 0, decrement it by 1; if its value is 0, suspend the process.
V: if another process is suspended waiting on the semaphore, let it resume; if no process is waiting, increment the semaphore by 1.

//Get or create a semaphore
int semget(key_t key, int nsems, int semflg);
    Parameter key is the key of the semaphore (typedef unsigned int key_t), i.e. the semaphore's number in the system. Different semaphores must not use the same key;
        the programmer must guarantee this. Writing key in hexadecimal is recommended.
    Parameter nsems is the number of semaphores in the created semaphore set. It is only meaningful when the set is being created; here it is always 1.
    Parameter semflg is a set of flags. To create a new semaphore when one does not exist, OR the permission bits with IPC_CREAT. If IPC_CREAT is not set and the semaphore does not exist, an error is returned (errno is 2, No such file or directory).
    If semget succeeds, the identifier of the semaphore set is returned; on failure -1 is returned and the reason is stored in errno.
    
//Control a semaphore (commonly used to set its initial value and to destroy it)
int semctl(int semid, int sem_num, int command, ...);
    Parameter semid is the semaphore identifier returned by semget.
    Parameter sem_num is the index into the semaphore set array, identifying one semaphore. Pass 0.
    Parameter command is the type of operation to perform on the semaphore. Two commands are commonly used:
        IPC_RMID: destroy the semaphore; no fourth parameter is needed;
        SETVAL: initialize the semaphore's value (after the semaphore is created its initial value must be set). The value comes from the fourth parameter, which is a user-defined union, as follows:
    If the semctl call fails it returns -1.
    // Union used for semaphore operations.
  union semun
  {
    int val;
    struct semid_ds *buf;
    unsigned short *arry;
  };
  
//Wait for the semaphore's value to become 1; as soon as the wait succeeds, set the value to 0. This is also called waiting for (acquiring) the lock.
//Setting the semaphore's value back to 1 is called releasing the lock.
int semop(int semid, struct sembuf *sops, unsigned nsops);
    Parameter semid is the semaphore identifier returned by semget.
    Parameter sops is a structure, shown below.
    Parameter nsops is the number of semaphores to operate on, i.e. the number of sembuf structures. Set it to 1 (only one semaphore is operated on).
struct sembuf
{
  short sem_num;   // Index within the semaphore set; for a single semaphore this is 0.
  short sem_op;    // Amount by which to change the semaphore in this operation: -1 for a wait (P) operation, 1 for a send (V) operation.
  short sem_flg;   // Set this flag to SEM_UNDO so the operating system tracks the semaphore.
                   // If the process exits without releasing the semaphore, the operating system releases it, avoiding resource deadlock.
};
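
A hedged sketch of using the three calls above as a binary lock between processes; the key 0x5005 and the P/V helper names are made up for illustration:

#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>

union semun { int val; struct semid_ds *buf; unsigned short *arry; };

static int semid;

void P(void)   // wait / lock: subtract 1 from the semaphore
{
    struct sembuf op = { 0, -1, SEM_UNDO };
    semop(semid, &op, 1);
}

void V(void)   // signal / unlock: add 1 to the semaphore
{
    struct sembuf op = { 0, +1, SEM_UNDO };
    semop(semid, &op, 1);
}

int main(void)
{
    union semun arg;
    semid = semget((key_t)0x5005, 1, 0666 | IPC_CREAT);  // get or create one semaphore
    arg.val = 1;
    semctl(semid, 0, SETVAL, arg);                       // initial value 1, so it acts as a mutex
    P();
    /* ... critical section shared with other processes ... */
    V();
    return 0;
}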

Signals

Signals are a mechanism inherited from UNIX systems for delivering asynchronous notifications to one or more processes. Signals can be generated by various asynchronous events, such as a keyboard interrupt. The shell also uses signals to pass job-control commands to its child processes.

#include <sys/types.h> 
#include <signal.h> 
// Install a signal handler
void (*signal(int sig, void (*func)(int)))(int); 
	sig: the signal to catch and handle
	func: pointer to the function used to handle the signal
	Return value: the pointer to the signal's previous handler
	
// Send a signal to the process whose process number is pid
int kill(pid_t pid, int sig);//The signal sent is sig. When pid is 0, sig is sent to every process in the calling process's process group.
int raise(int sig);// Send a signal to the current process.
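
A small sketch of installing a handler with signal; catching SIGINT is only an example, and the printf inside the handler is for demonstration rather than strict async-signal safety:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void handler(int sig)                 // called asynchronously when the signal arrives
{
    printf("caught signal %d\n", sig);
}

int main(void)
{
    signal(SIGINT, handler);          // install the handler for Ctrl+C
    signal(SIGCHLD, SIG_IGN);         // ignore child exit notifications, as described earlier
    while (1)
        pause();                      // sleep until a signal is delivered
    return 0;
}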

Shared memory

Shared memory is an inter-process communication method in which several processes share the same region of memory. IPC creates a special address range for the process, which appears in that process's address space. Other processes can attach the same shared memory segment to their own address spaces. All processes can access addresses in shared memory just as if they had been allocated by malloc. If one process writes data to shared memory, the change is immediately visible to the other processes.
Shared memory is the fastest form of IPC because the communication has no intermediary, whereas pipes, message queues and other methods pass data through an intermediate mechanism. Shared memory directly maps a segment of memory: the shared memory of several processes is the same physical region, merely mapped at different addresses in each process, so no copying is needed and the region can be used directly.
Note: shared memory itself has no synchronization mechanism; the programmer must provide one.

#include <sys/ipc.h>
#include <sys/shm.h>

//Create shared memory
int shmget(key_t key, size_t size, int shmflg);
    key: the key of the shared memory, an integer (typedef unsigned int key_t), i.e. the shared memory's number in the system. Different shared memory segments must not use the same key; the programmer must guarantee this. Writing key in hexadecimal is recommended.
    size: the size of the shared memory to create, in bytes.
    shmflg: the access permissions of the shared memory, in the same format as file permissions. 0666|IPC_CREAT means all users can read and write it, and the segment is created if it does not already exist.
    Returns the identifier of the shared memory.

//Attach the shared memory to the current process
void *shmat(int shm_id, const void *shm_addr, int shmflg);
    Parameter shm_id is the shared memory identifier returned by shmget.
    Parameter shm_addr specifies the address at which the shared memory is attached in the current process. It is usually NULL, letting the system choose the address.
    Parameter shmflg is a set of flag bits, usually 0.
    On success a pointer to the first byte of the shared memory is returned; on failure (void *)-1 is returned.

//Detach the shared memory from the current process
int shmdt(const void *shmaddr);
    Parameter shmaddr is the address returned by shmat.
    Returns 0 on success and -1 on failure.

//Delete shared memory
int shmctl(int shm_id, int command, struct shmid_ds *buf);
    Parameter shm_id is the shared memory identifier returned by shmget.
    Parameter command: pass IPC_RMID. 
    Parameter buf: pass 0.

Note:
1. After shared memory is created, it is not released when the program ends; it remains shared by all processes in the system.
2. The contents of shared memory are not cleared automatically; the last value written is kept until it is changed.
3. Shared memory itself has no lock; conflicts may occur when several programs operate on it at the same time.
4. A semaphore can act as the lock for shared memory.
5. View the shared memory of the current system: ipcs -m
6. Delete a shared memory segment: ipcrm -m <shared memory id>
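
A minimal sketch that creates, attaches, writes and removes a shared memory segment; the key 0x5005 and the string are illustrative:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    // Get or create a 1024-byte shared memory segment.
    int shmid = shmget((key_t)0x5005, 1024, 0666 | IPC_CREAT);

    // Attach it to this process's address space.
    char *shm = (char *)shmat(shmid, NULL, 0);

    strcpy(shm, "hello shared memory");   // any process attached to the segment sees this immediately
    printf("shm contains: %s\n", shm);

    shmdt(shm);                            // detach from this process
    shmctl(shmid, IPC_RMID, 0);            // delete the segment (normally done by the last user)
    return 0;
}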

Memory mapping

A memory-mapped file is a mapping from a file to a region of memory. Memory-mapped files are somewhat similar to virtual memory: an address range is reserved and physical storage is committed to it,
but here the physical storage comes from a file that already exists on disk, and the file must be mapped before it can be operated on. When memory-mapped files are used to process files stored on disk, you no longer have to perform I/O operations on them. Each process using this mechanism communicates with the others by mapping the same shared file into its own address space (this is similar to shared memory: as soon as one process writes to the mapped memory, the other processes can see it immediately).
Memory-mapped files can be used not only for communication between processes but also to process large files more efficiently. The usual approach first copies a file on disk into a buffer in kernel space and then into user space (memory); after modification the data is copied back to the kernel buffer and then to the disk file, four copies in total. If the file is large, the cost of copying is very high. So, does the system avoid copying data when it maps memory? mmap() itself copies no data; the real copy happens on a page fault. Because mmap() maps the file directly into user space, the page-fault handler copies the file from disk straight into user space according to the mapping, so the data is copied only once, which is more efficient than read/write.

#include <sys/mman.h> 
void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset); //mmap maps a file or other object into memory. The first parameter is the starting address of the mapping; setting it to 0 lets the system choose the address. The second parameter is the length of the mapping, the third is the desired memory protection flags, the fourth specifies the type of mapping, the fifth is the file descriptor of the file to map, and the sixth is the offset within the mapped object at which the mapping starts. On success a pointer to the mapped region is returned; on failure MAP_FAILED [whose value is (void *)-1] is returned.
int munmap(void *start, size_t length); //munmap removes the mapping starting at the address given by start; length is the size of the mapping to remove. Returns 0 if the unmapping succeeds; otherwise returns -1, with the reason stored in errno (error code EINVAL). 
int msync(void *addr, size_t len, int flags); //msync keeps the contents of the disk file consistent with the mapped memory, i.e. synchronizes them. The first parameter is the address at which the file is mapped into the process, the second is the size of the mapped region, and the third controls how the flush is performed.
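
A rough sketch of mapping an existing file and modifying it in place; the file name /tmp/mmap_demo is made up, and error checking is kept minimal:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/mmap_demo", O_RDWR);      // the file must already exist and be non-empty
    struct stat st;
    fstat(fd, &st);

    // Map the whole file read/write; MAP_SHARED makes changes visible to other processes mapping the same file.
    char *p = mmap(0, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'X';                                   // modify the file through memory, no read/write call needed
    msync(p, st.st_size, MS_SYNC);                // flush the change back to the file on disk
    munmap(p, st.st_size);
    close(fd);
    return 0;
}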

Multithreading

Thread creation, wait, exit and cleanup

#include <pthread.h>
//Create a thread
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,void *(*start_routine) (void *), void *arg);
    Parameter thread is the address of the thread identifier.
    Parameter attr sets the thread attributes; it is usually NULL, meaning the default attributes are used.
    Parameter start_routine is the address of the function the thread will run; just pass the function name.
    Parameter arg is the argument passed to the thread function. The newly created thread starts running at start_routine, which takes a single untyped pointer argument, arg. 
    Returns 0 on success; on error, an error number is returned.

Note: to pass multiple parameters to start_routine, you can put them in one structure and pass the structure's address as arg, as sketched below. However, this must be done very carefully, and programmers generally avoid it.
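
A hedged sketch of that structure-passing approach; the structure and field names are invented for illustration:

#include <pthread.h>
#include <stdio.h>

struct task_arg { int id; const char *name; };    // hypothetical bundle of parameters

void *start_routine(void *arg)
{
    struct task_arg *p = (struct task_arg *)arg;  // cast arg back to the real type
    printf("thread %d: %s\n", p->id, p->name);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct task_arg a = { 1, "hello" };           // must stay alive until the thread has used it
    pthread_create(&tid, NULL, start_routine, &a);
    pthread_join(tid, NULL);
    return 0;
}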

#include <pthread.h>
//The normal exit of the thread is similar to return
void pthread_exit(void *retval);
    Parameter retval is usually 0.
 Note: calling exit inside a thread terminates the entire process.
    There are three ways a thread can terminate:
    1) The thread's start_routine code ends and it dies naturally.
    2) The thread's start_routine calls pthread_exit to end.
    3) It is aborted by the main process or by another thread.   
After a child thread exits, its resources are not released automatically, which produces a zombie thread.

#include <pthread.h>
//Block the calling thread, wait for the target thread to return, and reclaim that thread's resources
int pthread_join(pthread_t thread, void **value_ptr);
    thread: the thread ID of the thread whose exit is awaited.
    value_ptr: receives the return value of the thread function when it exits.
    Return value: 0 means success; on failure an error number is returned, which can be used to judge the thread's exit status. 

#include <pthread.h>
//Set the thread's state to detached so that its resources are released automatically when it exits
int pthread_detach(pthread_t tid);
    tid: thread identifier
    Return value: zero on success; any other return value indicates an error

After a thread exits there is a good chance that a zombie thread is left behind; several solutions are listed below.

//Four ways to avoid zombie threads
1) Method one: after creating the thread, call pthread_join in the creating program to wait for the thread to exit. This method is generally not used, because pthread_join blocks.
  pthread_join(pthid,NULL);
2) Method two: before creating the thread, call pthread_attr_setdetachstate to set the thread to detached, so the system automatically reclaims the thread's resources when it exits.
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  pthread_attr_setdetachstate(&attr,PTHREAD_CREATE_DETACHED);  // Set the thread's attributes.
  pthread_create(&pthid,&attr,pth_main,(void*)((long)TcpServer.m_clientfd));
3) Method three: after creating the thread, call pthread_detach in the creating program to set the newly created thread to the detached state.
  pthread_detach(pthid);
4) Method four: inside the thread's main function, call pthread_detach to change its own state.
  pthread_detach(pthread_self());

Cleanup work for thread exit is generally not written inside the thread's main function, because the thread will not necessarily exit at one fixed place; instead, a thread cleanup function is registered in advance.

#include <pthread.h>
//Register thread cleanup function
void pthread_cleanup_push(void (*routine)(void *), void *arg);
    routine: the cleanup function to register; it returns void and takes a single void* parameter
    arg: the argument passed to the cleanup function
    
// Pop, or pop and execute, one cleanup function
void pthread_cleanup_pop(int execute);
    execute: when 0, one cleanup function is popped without being executed; when non-zero, one cleanup function is popped and executed
 Note:
    1. The two functions must appear in pairs within the same statement block, otherwise an error is reported
    2. The list of registered cleanup functions behaves like a stack: functions registered later pop first 
    3. When a thread terminates by calling pthread_exit(), all cleanup functions are executed
    4. When the thread terminates via return, the cleanup functions are not called (but in practice will they still run?)
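
A small sketch of registering a cleanup handler that releases a mutex even if the thread exits early; the lock and message are illustrative:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void cleanup(void *arg)                               // runs on pthread_exit or when pop(1) is reached
{
    printf("cleanup: %s\n", (char *)arg);
    pthread_mutex_unlock(&lock);
}

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);
    pthread_cleanup_push(cleanup, (void *)"releasing lock");  // register the handler
    /* ... work that might call pthread_exit() at any point ... */
    pthread_cleanup_pop(1);                                   // pop and execute the handler
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}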

Thread synchronization

Thread synchronization generally uses mutexes (mutual exclusion locks), condition variables, semaphores, spin locks, read-write locks, etc.

Mutex

As the name suggests, a mutex can be held by only one thread at a time. In practice a mutex can suffer from an unfair wake-up problem: the same thread may keep reacquiring the same mutex several times in a row.

#include <pthread.h>

pthread_mutex_t mutex;//Declaration creates a mutex

//Initialize mutex
int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr);
The parameter mutexattr specifies the attributes of the lock (see below); if it is NULL the default attributes are used.
The mutex's attributes are specified when the lock is created. When a resource is locked by one thread, other threads trying to lock it behave differently depending on the attribute. There are currently four values to choose from:
1) PTHREAD_MUTEX_TIMED_NP: the default, an ordinary lock. When one thread holds the lock, the other requesting threads form a waiting queue and acquire the lock by priority after it is unlocked. This locking strategy ensures fair resource allocation.
2) PTHREAD_MUTEX_RECURSIVE_NP: a nested (recursive) lock that allows the same thread to acquire the same lock several times, releasing it with the same number of unlock calls.
3) PTHREAD_MUTEX_ERRORCHECK_NP: an error-checking lock. If the same thread requests the same lock again, EDEADLK is returned; otherwise the behaviour is the same as PTHREAD_MUTEX_TIMED_NP.
4) PTHREAD_MUTEX_ADAPTIVE_NP: an adaptive lock, the simplest type; all waiters compete again after the lock is released.

//Blocking lock
int pthread_mutex_lock(pthread_mutex_t *mutex);
    If the lock is free, the calling thread acquires it and locks it
    If the lock is already held, the thread queues until it successfully obtains the lock.
//Non-blocking lock
int pthread_mutex_trylock(pthread_mutex_t *mutex);
    The semantics are similar to pthread_mutex_lock(),
    except that when the lock is already held it returns EBUSY immediately instead of waiting.

//Unlock
int pthread_mutex_unlock(pthread_mutex_t *mutex);
    The thread releases the lock it holds

//Destroy the lock
int pthread_mutex_destroy(pthread_mutex_t *mutex);
    The lock must be free (unlocked) before it can be destroyed
    If it is not free at destruction time, EBUSY is returned
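
A minimal sketch of two threads protecting a shared counter with a mutex; the counter and iteration count are illustrative:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;  // static initialization with default attributes
long counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&mutex);     // only one thread at a time passes this point
        counter++;
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  // always 200000 thanks to the mutex
    pthread_mutex_destroy(&mutex);
    return 0;
}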

Condition variables

Unlike a mutex, a condition variable can wake up several threads at the same time. Threads block while waiting on the condition variable; when the condition variable is signalled to all waiters, every thread blocked on it becomes runnable.

#include <pthread.h>

pthread_cond_t cond;//Declare a condition variable

//Initialize a condition variable
int pthread_cond_init(pthread_cond_t *restrict cond, 
                       const pthread_condattr_t *restrict attr); 
    cond: the condition variable to initialize                       
    attr: attributes of the condition variable; usually NULL, meaning the default attributes

//Destroy a condition variable 
int pthread_cond_destroy(pthread_cond_t *cond); 

//Block indefinitely until the condition variable is signalled
int pthread_cond_wait(pthread_cond_t *restrict cond, 
                       pthread_mutex_t *restrict mutex); 
    cond: the condition variable to wait on
    mutex: the mutex being used                  
Inside this function three things happen:
    1. The mutex is released first
    2. The thread blocks, waiting for the condition variable to be signalled
    3. When the condition variable is signalled, the mutex is locked again and the thread unblocks
 Note: after pthread_cond_wait() returns, if it will not be called again, the mutex must be released manually;
    otherwise the other threads in the wait queue will never get their turn

//Wait for a conditional variable for a limited time 
int pthread_cond_timedwait(pthread_cond_t *restrict cond,
                           pthread_mutex_t *restrict mutex,
                           const struct timespec *restrict abstime); 
    cond: the condition variable to wait on
    mutex: the mutex being used
    abstime: the absolute time at which the wait ends; the structure is defined as follows
        struct timespec {
            time_t tv_sec;  //seconds
            long tv_nsec;   //nanoseconds
        }        
Note: the time structure is normally used as follows:
    time_t cur = time(NULL);   // get the current time
    struct timespec t;         // define a timespec structure variable t
    t.tv_sec = cur + 1;        // time out 1 second from now
    pthread_cond_timedwait(&cond, &mutex, &t);  // wait on the condition variable for at most 1 second

//Wake up a thread blocked on a condition variable  
int pthread_cond_signal(pthread_cond_t *cond); 
    This function wakes up the first thread on the condition variable waiting queue
    
//Wake up all threads blocked on condition variables 
int pthread_cond_broadcast(pthread_cond_t *cond); 
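
A hedged sketch of the usual wait/signal pattern built from the calls above; the ready flag is illustrative, and the predicate is re-checked in a while loop because pthread_cond_wait can wake up spuriously:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
int ready = 0;                              // the condition the consumer waits for

void *consumer(void *arg)
{
    pthread_mutex_lock(&mutex);
    while (ready == 0)                      // re-check the predicate after every wake-up
        pthread_cond_wait(&cond, &mutex);   // releases mutex while waiting, re-locks it before returning
    printf("consumer: data is ready\n");
    pthread_mutex_unlock(&mutex);
    return NULL;
}

void *producer(void *arg)
{
    pthread_mutex_lock(&mutex);
    ready = 1;                              // change the shared state under the lock
    pthread_cond_signal(&cond);             // wake one waiting thread
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t c, p;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(c, NULL);
    pthread_join(p, NULL);
    return 0;
}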

Semaphore

A semaphore can limit the number of threads running at the same time. By setting the semaphore's initial value, a thread is released whenever the semaphore is greater than 0, and the semaphore's value is then decremented by 1.

#include <semaphore.h>

sem_t sem;//Declare a semaphore

//Create semaphore
int sem_init(sem_t *sem, int pshared, unsigned int value);
    sem: the semaphore
    pshared: if 0, the semaphore is used for synchronization between threads of the same process;
             if greater than 0, it is used for synchronization between related processes (i.e. those created by fork)
    value: the initial value of the semaphore   

//Destroy a semaphore
int sem_destroy(sem_t *sem);     

//Waiting semaphore
int sem_wait(sem_t *sem);//Wait indefinitely
int sem_trywait(sem_t *sem);//Try to take the semaphore; do not block if it is unavailable
int sem_timedwait(sem_t *sem, const struct timespec *abs_timeout);//Stop waiting once the timeout expires
 Note: these three functions decrement the semaphore's value by 1; if the value is already 0, sem_wait blocks until it becomes greater than 0

//Release semaphore
int sem_post(sem_t *sem);
Note: this function increments the semaphore's value by 1; once the semaphore is greater than 0, a waiting thread can acquire it

//Gets the value of the semaphore
int sem_getvalue(sem_t *sem, int *sval);
    sem: the semaphore
    sval: address in which the semaphore's value is stored
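
A small sketch of limiting concurrency with an unnamed semaphore: three worker threads but an initial value of 2, so at most two run the guarded section at once (the numbers are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t sem;

void *worker(void *arg)
{
    sem_wait(&sem);                    // value-1; blocks while the value is 0
    printf("thread %ld working\n", (long)arg);
    sleep(1);                          // pretend to do some work
    sem_post(&sem);                    // value+1; lets another waiting thread in
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    sem_init(&sem, 0, 2);              // pshared=0: shared between threads of this process; initial value 2
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}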

Spin lock

Spin locks and mutexes are used in basically the same way; they differ only in how waiting is handled:
While waiting, a mutex puts the requesting thread to sleep (it blocks).
While waiting, a spin lock keeps looping, repeatedly checking whether the lock is free; this wastes CPU time but acquires the lock faster.

#include <pthread.h>

pthread_spinlock_t spin;//Declare (create) a spin lock

//Initialize spin lock
int pthread_spin_init(pthread_spinlock_t *spin, int pshare);
    spin:Spin lock name
    pshare: the sharing attribute:
        PTHREAD_PROCESS_SHARED: can be shared with other processes
        PTHREAD_PROCESS_PRIVATE: can only be used by threads of this process
        
//Blocking lock
int pthread_spin_lock(pthread_spinlock_t *spin);
    If the lock is free, the calling thread acquires it and locks it
    If the lock is already held, the thread spins until it successfully obtains the lock.

//Non-blocking lock
int pthread_spin_trylock(pthread_spinlock_t *spin);
    The semantics are similar to pthread_spin_lock(),
    except that when the lock is already held it returns EBUSY immediately instead of waiting.
    
//Unlock
int pthread_spin_unlock(pthread_spinlock_t *spin);
    The thread releases the lock it holds

//Destroy the lock
int pthread_spin_destroy(pthread_spinlock_t *spin);
    The lock must be free (unlocked) before it can be destroyed
    If it is not free at destruction time, EBUSY is returned

Read-write lock

A read-write lock can be held by one writer or by several readers at the same time.
1. If the lock is currently held for reading, other threads can acquire a read lock but cannot acquire a write lock.
2. If the lock is currently held for writing, other threads can acquire neither a read lock nor a write lock.
The read-write lock does not strictly forbid writing while holding a read lock or reading while holding a write lock; the program itself must maintain that discipline.

#include <pthread.h>

pthread_rwlock_t rwlock;//Declare a read-write lock

//Initialize read / write lock
int pthread_rwlock_init(pthread_rwlock_t *rwptr, const pthread_rwlockattr_t *attr);
//Destroy the read-write lock
int pthread_rwlock_destroy(pthread_rwlock_t *rwptr);

//Blocking read lock: if the read-write lock is held by a writer, the calling thread blocks
int pthread_rwlock_rdlock(pthread_rwlock_t *rwptr);
//Blocking write lock: if the read-write lock is held by a reader or a writer, the calling thread blocks and waits
int pthread_rwlock_wrlock(pthread_rwlock_t *rwptr);

// In non-blocking mode, an EBUSY error is returned if the lock cannot be acquired immediately
int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwptr);
int pthread_rwlock_trywrlock(pthread_rwlock_t *rwptr);

//Release the lock
int pthread_rwlock_unlock(pthread_rwlock_t *rwptr);

//Set the attributes of the read-write lock
int pthread_rwlockattr_setpshared(pthread_rwlockattr_t *attr, int pshared);
        The second parameter represents the attribute to be set. The optional values are as follows
        PTHREAD_PROCESS_PRIVATE: Available only in this process
        PTHREAD_PROCESS_SHARED: Can be shared by multiple processes
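
A minimal sketch of protecting a shared value with a read-write lock: readers take the read lock and may run concurrently, while the writer takes the write lock exclusively (the variable names are illustrative):

#include <pthread.h>
#include <stdio.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_value = 0;

void *reader(void *arg)
{
    pthread_rwlock_rdlock(&rwlock);              // several readers can hold this at the same time
    printf("reader sees %d\n", shared_value);
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void *writer(void *arg)
{
    pthread_rwlock_wrlock(&rwlock);              // exclusive: blocks until no reader or writer holds the lock
    shared_value++;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    pthread_rwlock_destroy(&rwlock);
    return 0;
}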
