What are the components of the operating system?
Process management, storage management, device management, file management, program interface, user interface.
Operating system page replacement algorithms: FIFO, LRU, OPT
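As a quick illustration of the difference, here is a minimal Python sketch that counts page faults under FIFO and LRU for the same reference string (OPT needs knowledge of future accesses, so it is omitted; the reference string and frame count are arbitrary example values):

```python
from collections import OrderedDict

def fifo_faults(pages, frames):
    """Count page faults under FIFO replacement."""
    memory, queue, faults = set(), [], 0
    for p in pages:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                victim = queue.pop(0)       # evict the page loaded earliest
                memory.remove(victim)
            memory.add(p)
            queue.append(p)
    return faults

def lru_faults(pages, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()                  # keys kept in recency order
    faults = 0
    for p in pages:
        if p in memory:
            memory.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[p] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(ref, 3), lru_faults(ref, 3))
```

On this reference string LRU produces fewer faults than FIFO, since it keeps recently touched pages resident.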
Memory management of operating system
Partition management, paging management, segmentation management, and combined segment-page management
Basic characteristics of an operating system: concurrency, sharing, virtualization, and asynchrony
-
Concurrency:
Parallelism vs. concurrency: parallelism means executing at literally the same instant; concurrency means that, macroscopically, activities appear to run at the same time, while microscopically they execute alternately by time-sharing.
Process: a program is a static entity; a process is that program in concurrent execution, and it is the basic unit of independent operation and resource allocation in the system.
Thread: the basic unit of independent execution and scheduling. A process can contain multiple threads, which can run concurrently, and the process's resources are shared among its threads.
Sharing: resource sharing, i.e. resource reuse.
Mutually exclusive sharing mode: resources that can be accessed by only one process in a given period are called critical resources or exclusive resources, such as a printer.
Simultaneous access mode: "simultaneous" macroscopically, alternating access microscopically.
Virtualization: turning one physical entity into several logical counterparts.
-
Asynchrony: processes advance at unpredictable speeds.
Characteristics of batch processing, time-sharing and real-time operating systems
-
The main features of a batch operating system are offline use, multiprogramming, and batch processing.
Offline means that the user works away from the machine: after submitting a job, the user hardly interacts with the computer until the results come back.
Multiprogramming means that, following a multiprogramming scheduling policy, several jobs are selected from a batch of backup jobs, loaded into memory, and organized to run.
Batch processing means that the operator groups the jobs submitted by users into batches, and the operating system schedules each batch of jobs automatically.
A batch system has a high degree of automation, large throughput, high resource utilization, and low system overhead. However, the turnaround time of each job is long, and no means of interaction between users and the system is provided. It is suitable for large, mature jobs.
Compared with a batch system, a time-sharing system has lower throughput and resource utilization. Batch and time-sharing systems both handle work in units of jobs and are general-purpose systems.
A real-time operating system, by contrast, is a special-purpose system that handles external events as they occur at random. It is real-time, highly secure and reliable, and provides limited human-computer interaction; its resource utilization is lower than that of a batch system.
The main characteristics of a time-sharing operating system: multiplexing, interactivity, exclusivity, and timeliness.
Multiplexing means that one computer is connected to several terminals, and the users at these terminals can use the computer at (or nearly at) the same time.
Interactivity means that the user works online: the user directly controls the running of programs and interacts with them through the terminal in a human-machine conversational way.
Exclusivity means that, because the system uses time-slice round-robin to serve many terminal users at once, each user cannot feel that others are also using the machine, as if the computer were theirs alone.
Timeliness means that a user's request receives a response within a very short time.
-
The main characteristics of a real-time operating system are timeliness and high reliability.
Timeliness means that the system can respond promptly to requests from external events and complete their processing within a specified deadline.
High reliability means that the system itself must be safe and dependable, because in real-time applications such as production-process control and airline ticket booking, a delayed or lost piece of information can have serious consequences.
When developing a Java network server, explain the difference between blocking/non-blocking and synchronous/asynchronous IO in communication.
-
Synchronous/asynchronous mainly describes the client:
-
Synchronization: when the client issues a function call, the call does not return, and no subsequent operation runs, until the result is obtained. In other words, things are done one at a time: only when one finishes can the next begin.
-
Asynchrony: when the client issues a function call, the caller can continue with subsequent operations before the result is available. When the call completes, the caller is usually informed through status, notifications, or callbacks. For an asynchronous call, the moment of return is not controlled by the caller. Although the distinction mainly concerns the client, the server is not entirely irrelevant: synchronous/asynchronous behavior can only be realized with the server's cooperation. The choice is made by the client itself, and the client need not care whether the server is blocking or non-blocking.
-
Blocking/non-blocking mainly describes the server:
- Blocking: a blocking call suspends the current thread until the callee's result is returned; the calling thread does not resume until it has the result.
- Non-blocking: the call does not suspend the current thread; if the result is not immediately available, the call returns at once instead of waiting.
Synchronization is a process, while blocking is a state of a thread. When multiple threads operate on a shared variable, races can occur; synchronization is then needed to keep more than one thread from entering the critical section at the same time. During this, a thread that arrives at the critical section later is blocked and must wait for the earlier thread to leave.
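The question concerns Java, but the blocking/non-blocking distinction itself is language-neutral. The following Python sketch uses a local socket pair (an illustrative stand-in for a real client connection) to show how a non-blocking read returns immediately instead of suspending the thread; Java's java.nio channels behave analogously after configureBlocking(false).

```python
import socket

# A connected pair of sockets, standing in for a client/server connection.
a, b = socket.socketpair()

# Blocking mode (the default): recv() would suspend the calling thread
# until data arrives, so here we call it only after data has been sent.
b.sendall(b"ping")
received = a.recv(4)                 # returns because data is buffered

# Non-blocking mode: recv() never suspends the thread; if no data is
# ready it raises BlockingIOError and returns control immediately.
a.setblocking(False)
try:
    a.recv(4)
    got_data = True
except BlockingIOError:
    got_data = False

print(received, got_data)
a.close(); b.close()
```

In a real server, non-blocking sockets are combined with readiness notification (select/epoll, or a Java Selector) so one thread can service many connections.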
What are the process states?
- Ready state: the process has obtained all required resources except the processor and is waiting to be allocated processor time;
- Running state: the process occupies the processor and is running;
- Blocked state: the process is waiting for some resource and cannot continue executing;
The difference between process and thread
A process is the operating system's basic unit of independent operation and resource allocation; a thread is the basic unit of CPU scheduling and dispatch. A process can contain multiple threads; all resources of the process are shared among its threads, while each thread has its own stack and local variables. Processes have independent memory spaces: whenever a process starts, the system allocates space for it and builds data tables to maintain its code, stack, and data segments, which makes processes expensive to manage. Threads share the process's data and use the same address space, so switching the CPU between threads costs less; because threads hold almost no system resources of their own, the overhead of a thread switch is far smaller than that of a process switch. Multiple threads, even from different processes, can access a single shared mutex object, which ensures that when several threads access a memory block the data in it is not corrupted. Supplement: a process is an executing instance of an application, with independent memory space and system resources; a thread is the basic unit of CPU scheduling and dispatch, the smallest unit of execution within a process, able to complete an independent sequence of control.
Relationship between process and thread
(1) A thread can belong to only one process, while a process can have multiple threads, but at least one. The thread is the smallest execution and scheduling unit recognized by the operating system.
(2) Resources are allocated to processes, and all threads of the same process share all resources of that process. Threads in the same process share the code segment (code and constants), the data segment (global and static variables), and the extended segment (heap storage). However, each thread has its own stack segment, also called the runtime stack, used to store local and temporary variables.
(3) The processor is allocated to threads; it is threads that actually run on the processor.
(4) Threads need cooperative synchronization during execution; threads of different processes must synchronize by means of message communication.
Difference between program and process
Program: a collection of computer instructions stored on disk as a file. A program is a passive entity; in a multiprogramming system it cannot run independently, let alone execute concurrently with other programs.
Use of system resources: none (a program cannot apply for system resources, cannot be scheduled by the system, cannot serve as an independent unit of execution, and occupies no running resources of the system).
Process: a process is the running activity of a process entity (comprising the program segment, related data segments, and the process control block, PCB). It is an execution of a program in its own address space and an independent unit of resource allocation and scheduling in the system.
Use of system resources: yes (a process is the unit of resource application, scheduling, and independent operation, so it uses the running resources of the system).
A process control block (PCB) is a data structure used to record the process's state and other related information. The PCB is the sole sign of a process's existence: if the PCB exists, the process exists. When the system creates a process, it generates a PCB; when the process is terminated, its PCB is destroyed.
Function of process control block PCB
- Serves as the mark of a basic unit of independent operation
- Enables the intermittent (stop-and-resume) mode of operation
- Provides the information needed for process communication management
- Provides the information needed for process scheduling
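A PCB's exact layout is kernel-specific; the sketch below is a hypothetical minimal structure in Python whose field names are illustrative only, showing the kinds of information a PCB records (identity, state, saved context, scheduling data, resource and communication bookkeeping):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block; fields are hypothetical."""
    pid: int                                         # unique process identifier
    state: str = "ready"                             # ready / running / blocked
    program_counter: int = 0                         # where to resume execution
    registers: dict = field(default_factory=dict)    # saved CPU context
    priority: int = 0                                # used by the scheduler
    open_files: list = field(default_factory=list)   # allocated resources
    message_queue: list = field(default_factory=list)  # IPC bookkeeping

pcb = PCB(pid=1)
pcb.state = "running"        # dispatcher moves the process ready -> running
pcb.program_counter = 42     # context is saved here on a later switch
pcb.state = "blocked"        # e.g. the process now waits for I/O
print(pcb.pid, pcb.state)
```

Saving and restoring this structure is exactly what makes the intermittent (stop-and-resume) mode of operation possible.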
What is the critical zone? How to resolve conflicts?
The section of code in each process that accesses critical resources is called the critical section. Only one process is allowed inside the critical section at a time, and once a process has entered, no other process may enter.
-
If several processes request to enter an idle critical section, only one process may be allowed in at a time;
-
At any time there can be at most one process inside the critical section. If a process has entered its own critical section, all other processes trying to enter theirs must wait;
-
A process entering the critical section should leave within a limited time, so that other processes can enter their own critical sections in time;
-
If a process cannot enter its critical section, it should yield the CPU to avoid busy waiting.
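A minimal sketch of mutual exclusion in practice, using Python's threading.Lock as the entry/exit protocol around a critical section (the counter, iteration count, and thread count are arbitrary example values):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # entry protocol: at most one thread inside
            counter += 1     # the critical section (shared-variable update)
                             # exit protocol: lock released on leaving `with`

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

With the lock, the final count is exactly 4 × 100,000; without it, concurrent read-modify-write steps could interleave and lose updates.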
The difference between process synchronization and mutual exclusion
-
Synchronization: achieving orderly access to a resource by its visitors, built on top of mutual exclusion;
-
Mutual exclusion: a resource can be accessed by only one process at a time; access is exclusive and unique, and the order of access under pure mutual exclusion is indeterminate;
Broadly speaking, synchronization embodies a kind of cooperation, while mutual exclusion embodies exclusiveness.
Process (job) scheduling algorithms: 6
1) First-come, first-served (FCFS): jobs are scheduled in the order they arrive in the backup job queue (or processes in the order they enter the ready queue).
2) Shortest process (job) first (SPF): select from the job backup queue the job with the shortest (estimated) running time and load it into main memory to run.
3) Priority scheduling (FPF; non-preemptive or preemptive): schedule according to process priority, so that higher-priority processes run first.
4) Highest response ratio next: compute the response ratio RP of every job in the backup queue, then select the job with the largest value to run.
5) Round-robin: every process in the ready queue obtains one time slice of processor execution in turn.
6) Multilevel feedback queues (multiple ready queues): the ready queue is split into several sub-queues according to the nature and type of jobs; each job (or process) is placed into the corresponding queue, and different ready queues may use different scheduling algorithms.
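For the first two algorithms the average waiting time is easy to compute. The sketch below assumes, for simplicity, that all jobs arrive at time 0 (the burst times are arbitrary example values); SPF is then just FCFS applied to the bursts sorted ascending:

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run in the given order
    (all jobs assumed to arrive at time 0)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed      # this job waited for everything scheduled before it
        elapsed += b
    return wait / len(bursts)

bursts = [24, 3, 3]                       # CPU bursts in arrival order
fcfs = avg_waiting_time(bursts)           # first come, first served
spf  = avg_waiting_time(sorted(bursts))   # shortest job runs first
print(fcfs, spf)
```

Here FCFS averages 17 time units of waiting while SPF averages 3, showing why short-job-first minimizes mean waiting time (at the risk of starving long jobs).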
Communication modes between processes: 7
Pipe: a pipe is a half-duplex communication channel; data flows in one direction only, and it can be used only between related processes (usually a parent and its child).
Named pipe (FIFO): also half-duplex, but it allows communication between unrelated processes.
Message queue (MessageQueue): a linked list of messages, stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks that signals carry little information, pipes carry only unformatted byte streams, and buffers are limited in size.
Shared memory (SharedMemory): one of the most efficient forms of inter-process communication. It lets multiple processes access the same region of memory, so each process sees the others' updates to the shared data immediately. This approach relies on some synchronization mechanism, such as mutexes or semaphores.
Semaphore: a counter used to control access by multiple processes to a shared resource. It is often used as a locking mechanism to stop other processes from accessing a shared resource while one process is using it, and thus serves mainly as a means of synchronization between processes and between threads of the same process.
Signal: a relatively complex communication method, used to notify the receiving process that some event has occurred.
Socket: used for process communication between different machines on a network; very widely applied.
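A minimal sketch of the pipe mechanism using Python's os.pipe. In practice the two file descriptors would be split between a parent and child after fork; for brevity, both ends are used in one process here, which still shows the one-way byte-stream nature:

```python
import os

# os.pipe() returns a (read_fd, write_fd) pair; data flows one way only,
# matching the half-duplex behavior described above.
r, w = os.pipe()

os.write(w, b"hello from the writer")
os.close(w)                   # writer done; the reader will see EOF after the data

data = os.read(r, 1024)       # drains the bytes buffered in the pipe
os.close(r)
print(data.decode())
```

The kernel buffers the written bytes, which is why the read succeeds even though the writer end is already closed.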
Scheduling principle of process entering critical zone
(1) If several processes request to enter an idle critical section, only one may be allowed in at a time.
(2) At any time there can be at most one process inside the critical section; if a process has entered its own critical section, all other processes trying to enter theirs must wait.
(3) A process entering the critical section should leave within a limited time, so that other processes can enter their own critical sections in time.
(4) If a process cannot enter its critical section, it should yield the CPU to avoid busy waiting.
What is a buffer overflow? What's the harm? What is the reason?
A buffer overflow occurs when a computer writes more data into a buffer than the buffer can hold, so the overflowing data overwrites adjacent, legitimate data. The harm is twofold: the program crashes, leading to denial of service; or control jumps to and executes a piece of malicious code. The main cause of buffer overflows is that programs do not carefully check user input.
Inter thread communication mode
Threads in the same process share the address space, so no separate channel is needed to pass data, but synchronization/mutual exclusion is needed to protect shared global variables.
1) Locking mechanisms: mutex locks, condition variables, and read-write locks. Mutex: provides exclusive access, preventing a data structure from being modified concurrently. Read-write lock: allows several threads to read shared data at the same time, while write operations are mutually exclusive. Condition variable: blocks a thread atomically until a particular condition becomes true; the condition is tested under the protection of a mutex, so condition variables are always used together with mutexes.
2) Semaphore mechanism: a counter; includes unnamed and named thread semaphores.
3) Signal mechanism.
Communication between threads is mainly for synchronization; threads need no dedicated mechanism for exchanging data.
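A sketch of the condition-variable pattern described in 1): a bounded-buffer producer/consumer where each side waits, under the mutex, until its condition holds (buffer capacity and item count are arbitrary example values):

```python
import threading
from collections import deque

CAPACITY = 2
buffer = deque()
cond = threading.Condition()   # a mutex plus a queue of waiting threads
consumed = []

def producer(items):
    for item in items:
        with cond:                        # test the condition under the mutex
            while len(buffer) == CAPACITY:
                cond.wait()               # buffer full: release lock and sleep
            buffer.append(item)
            cond.notify_all()             # wake any consumer waiting on "empty"

def consumer(n):
    for _ in range(n):
        with cond:
            while not buffer:
                cond.wait()               # buffer empty: wait for a producer
            consumed.append(buffer.popleft())
            cond.notify_all()             # wake a producer blocked on "full"

p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5,))
p.start(); c.start()
p.join(); c.join()
print(consumed)
```

Note the `while` (not `if`) around each `wait()`: a woken thread must re-test its condition, since another thread may have changed the buffer first.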
Thread (process) synchronization / mutual exclusion mechanism
1. Critical section: serializes multi-threaded access to a public resource or a piece of code; fast, and suited to controlling data access.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource with a limited number of users.
4. Event: used to notify a thread that some event has occurred, so that a subsequent task can start.
A coroutine is a kind of lightweight user-mode thread whose scheduling is entirely controlled by the user. A coroutine has its own register context and stack. When coroutines are switched, the register context and stack are saved elsewhere; when switching back, the previously saved context and stack are restored. Because the stack is manipulated directly, there is essentially no cost of switching into the kernel, and global variables can be accessed without locking, so context switches are very fast.
Thread scheduling is divided into cooperative and preemptive scheduling. Java uses preemptive scheduling: the execution time of each thread is allocated by the operating system, and thread switching is not decided by the thread itself (as it would be under cooperative scheduling). This also helps keep thread behavior platform-independent.
Deadlock concept
Deadlock refers to a situation in which two or more processes (or threads) each hold resources the others need, leaving all of them waiting and unable to proceed.
Causes of Deadlock:
1. Competition for resources; 2. An improper order of process execution.
Four conditions for deadlock generation
Mutual exclusion condition: a resource can be used by only one process at a time.
Request-and-hold condition: a process blocked while requesting resources keeps holding the resources it has already obtained.
No-preemption condition: resources a process has obtained cannot be forcibly taken away before the process has finished using them.
Circular wait condition: a circular chain of processes exists in which each waits for a resource held by the next.
Deadlock prevention
Prevent deadlock by breaking one of the four necessary conditions for deadlock generation;
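For example, the circular wait condition can be broken by giving every lock a fixed global rank and always acquiring locks in rank order. The Python sketch below is illustrative (the rank table and `transfer` function are hypothetical names, not a standard API):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}   # fixed global rank for each lock
results = []

def acquire_in_order(*locks):
    """Acquire locks by ascending rank, which breaks the circular-wait condition."""
    ordered = sorted(locks, key=lambda l: ORDER[id(l)])
    for l in ordered:
        l.acquire()
    return ordered

def transfer():
    # The request order (b before a) does not matter: acquisition is ranked,
    # so no two threads can ever hold the locks in opposite orders.
    held = acquire_in_order(lock_b, lock_a)
    try:
        results.append(1)        # work that needs both resources
    finally:
        for l in reversed(held):
            l.release()

threads = [threading.Thread(target=transfer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))
```

Without the ranking, two threads taking the same pair of locks in opposite orders could each hold one lock and wait forever for the other.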
Deadlock avoidance
Basic idea: the system dynamically checks each resource request issued by a process and decides, based on the check, whether to allocate. If the allocation might lead the system into deadlock, it is not made; otherwise it is. This is a dynamic strategy that keeps the system out of the deadlock state. If the operating system can guarantee that every process obtains all the resources it needs within a finite time, the system is in a safe state; otherwise it is unsafe.
Safe state: if there exists a safe sequence {P1, P2, ..., Pn} of all processes, the system is in a safe state. A sequence is safe if, for every process Pi (1 <= i <= n), the resources Pi may still need do not exceed the system's currently remaining resources plus the resources currently held by all processes Pj (j < i). A system in a safe state will not deadlock.
Unsafe state: if there is no safety sequence, the system is in an unsafe state.
The banker's algorithm is a famous deadlock-avoidance algorithm. Before allocating resources, it checks whether the allocation could lead the system into deadlock: if it could, the allocation is refused; otherwise it is granted.
According to the idea of banker algorithm, when the process requests resources, the system will allocate system resources according to the following principles:
(1) When the maximum demand of a process for resources does not exceed the number of resources in the system, the process can be accepted.
(2) The process can request resources by stages when the total number of requests cannot exceed the maximum demand.
(3) When the existing resources of the system cannot meet the number of resources needed by the process, the request for the process can be delayed, but the process can always get resources in a limited time.
(4) When the existing resources of the system can meet the number of resources still needed by the process, it must be tested whether the existing resources of the system can meet the maximum number of resources still needed by the process. If they can, the resources will be allocated according to the current application amount, otherwise the allocation will be postponed.
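Principle (4) is exactly the safety check at the heart of the banker's algorithm. The sketch below runs it on a common textbook configuration of 5 processes and 3 resource types (the matrices are example data):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe sequence of process
    indices if one exists, otherwise None."""
    work = available[:]                 # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Pretend process i runs to completion and frees its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
print(is_safe([3, 3, 2], allocation, need))
```

To decide on an actual request, the system would tentatively subtract the request from `available` (and add it to that process's allocation) and grant it only if this check still finds a safe sequence.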
Basic methods to solve deadlock
Deadlock prevention:
- One-shot resource allocation: a process requests all its resources at once (breaks the request-and-hold condition).
- Preemptable resources: when a process's new resource request cannot be met, it releases the resources it already holds (breaks the no-preemption condition).
- Ordered resource allocation: the system numbers each class of resources, and every process must request resources in increasing order of number and release them in the reverse order (breaks the circular wait condition).
Deadlock avoidance: the prevention strategies above can seriously hurt system performance, so avoidance imposes weaker restrictions to obtain acceptable performance. Under avoidance, processes are allowed to request resources dynamically, and before each allocation the system computes in advance whether the allocation is safe: if it would not drive the system into an unsafe state, the resources are allocated; otherwise the process waits. The most representative avoidance algorithm is the banker's algorithm.
Deadlock detection: first assign a unique number to each process and each resource; then maintain a resource allocation table and a process wait table.
Deadlock recovery: when a process deadlock is detected, the processes involved should be released from the deadlock state immediately. Common methods: preempt enough resources from other processes and reallocate them to the deadlocked processes; or kill processes, either all deadlocked processes directly or the lowest-cost process first, repeatedly until enough resources are free and the deadlock is eliminated. "Cost" here refers to a process's priority, the expense of its run so far, and its importance and value.