By default, a container has no resource constraints and can use as many resources as the kernel scheduler will give it. Docker provides parameters to control the memory, CPU, and block IO available to a container when it is started.
Only memory and CPU limits are covered here.
Memory
Memory is an incompressible resource.
OOME
In Linux, if the kernel detects that the host does not have enough memory left to execute important system functions, it throws an OOME (Out Of Memory Exception) and kills processes to free memory.
Once an OOME occurs, any process can be killed, including the docker daemon. Docker deliberately lowers the OOM priority of the docker daemon so it is unlikely to be killed, but containers' priorities are not adjusted. When memory runs low, the kernel scores every process according to its own algorithm, then kills the process with the highest score to free memory.
You can specify the --oom-score-adj parameter on docker run (default 0). It influences how likely the container is to be killed by adjusting its score: higher values are more likely to be killed. Note that this parameter only offsets the final score; the kernel always kills the highest-scoring process, so a process with a small adjustment can still end up with the highest score and be killed first.
You can specify --oom-kill-disable=true for particularly important containers that must never be killed by the OOM killer.
```
--oom-kill-disable        Disable OOM Killer
--oom-score-adj int       Tune host's OOM preferences (-1000 to 1000)
```
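For example, a noncritical container can be made the preferred OOM victim while an important one is protected. A minimal sketch; the container names are illustrative and busybox is just a stand-in image:

```
# Raise the OOM score: this container is more likely to be killed under memory pressure
docker run -d --name batch-job --oom-score-adj 500 busybox sleep 3600

# Disable the OOM killer for this container; combine with -m (see the table below)
docker run -d --name critical-svc -m 256m --oom-kill-disable busybox sleep 3600
```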
Memory Limit
Option | Description
---|---
`-m, --memory` | Memory limit. The format is a number plus a unit, where the unit can be b, k, m, or g; the minimum is 4M.
`--memory-swap` | Limit on total memory + swap. Same format as above; must be larger than the `-m` setting.
`--memory-swappiness` | By default the host may swap out the container's anonymous pages; a value between 0 and 100 sets the percentage of pages allowed to be swapped.
`--memory-reservation` | Soft limit on memory usage, enforced when Docker detects that the host is running out of memory. Must be less than the value set by `--memory`.
`--kernel-memory` | Kernel memory the container can use; the minimum is 4m.
`--oom-kill-disable` | Spare the container when an OOM occurs. Only set this together with `-m`; otherwise the container can exhaust the host's memory and cause host applications to be killed instead.
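As a quick sketch of combining a hard limit with a soft reservation (the sizes are arbitrary and busybox is only a stand-in image):

```
# 512 MB hard limit; under host memory pressure Docker tries to push
# the container back below the 256 MB soft reservation
docker run -it --rm -m 512m --memory-reservation 256m busybox sh
```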
--memory-swap parameter
This parameter only takes effect in conjunction with -m; the description in the table above is simplified.
- General usage: set it larger than `-m`; it limits total memory + swap.
- Disable swap: set it equal to `-m`. Then `--memory` and `--memory-swap` are the same, the available swap is 0, and swap is effectively disabled.
- Default: set it to 0 or leave it unset. If swap is enabled on the Docker host, the container may use swap up to twice the memory limit.
- Unlimited: set it to -1. If swap is enabled on the Docker host, the container can use all of the host's swap.
Note that running the free command inside a container does not reflect these limits; the swap figures it shows have no reference value.
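A minimal sketch of the four modes (busybox is just a stand-in image; the sizes are arbitrary):

```
docker run -it --rm -m 256m --memory-swap 512m busybox sh   # general: 256 MB memory + 256 MB swap
docker run -it --rm -m 256m --memory-swap 256m busybox sh   # equal to -m: swap disabled
docker run -it --rm -m 256m busybox sh                      # default: swap up to 2x the memory limit
docker run -it --rm -m 256m --memory-swap=-1 busybox sh     # unlimited: all of the host's swap
```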
CPU
The CPU is a compressible resource.
By default, every container can use all of the host's CPU resources. The scheduler used by most systems is CFS (the Completely Fair Scheduler), which schedules every running process fairly. Processes can be divided into two categories: CPU-intensive (low priority) and IO-intensive (high priority). The kernel monitors processes in real time and lowers the priority of a process that has held the CPU for too long.
Realtime scheduling is also supported since Docker 1.13.
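For realtime scheduling, docker run provides the --cpu-rt-runtime and --cpu-rt-period flags (values in microseconds). A sketch along the lines of the Docker documentation; it assumes the daemon itself was started with CPU realtime support (e.g. dockerd --cpu-rt-runtime=950000), and busybox is only a stand-in image:

```
# Allow up to 950000 us of realtime runtime per scheduling period;
# rtprio and sys_nice let processes inside actually request realtime priority
docker run -it --rm --cpu-rt-runtime 950000 --ulimit rtprio=99 --cap-add=sys_nice busybox sh
```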
There are three CPU resource allocation strategies:
- Proportional allocation by shares (compressible)
- Limit the maximum number of cores
- Restrict which specific core(s) may be used
Option | Description
---|---
`-c, --cpu-shares int` | CPU time is shared by a group of containers in proportion to their shares. When some containers are idle, heavily loaded containers may take up their unused CPU time (compressed proportional allocation); when the idle containers become busy again, the CPU time is redistributed.
`--cpus decimal` | Specifies the number of CPU cores the container may use, directly limiting its CPU resources.
`--cpuset-cpus string` | Restricts the container to run only on specific CPU cores (CPU binding); cores are numbered 0,1,2,3, and so on.
CPU Share
Docker sets a container's CPU share with -c, --cpu-shares; the value is an integer.
Docker lets you assign each container a number representing its CPU share; by default each container's share is 1024. When multiple containers run on the host, each consumes CPU time in proportion to its share. For example, suppose two containers on the host constantly use the CPU (ignoring other host processes for simplicity): if both have a share of 1024, each gets 50% of the CPU; if one container's share is changed to 512, their CPU usage becomes roughly 67% and 33%; and if the container with share 1024 is removed, the remaining container's usage rises to 100%.
In summary, Docker dynamically adjusts each container's proportion of CPU time based on the containers and processes running on the host. The advantage is that the CPU is kept as busy as possible, CPU resources are fully used, and containers are treated relatively fairly; the disadvantage is that you cannot cap a container's CPU usage at a fixed value.
Number of CPU cores
Since version 1.13, Docker has provided the --cpus parameter to limit the number of CPU cores a container can use. It lets us set the container's CPU quota more precisely, is easier to understand, and is therefore the more common approach.
--cpus takes a floating point number representing the maximum number of cores the container may use, precise to two decimal places, so a container can be limited to as little as 0.01 of a core. For example, we can limit a container to 1.5 cores.
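Continuing that example, a minimal sketch (busybox is only a stand-in image):

```
# The container may use at most 1.5 cores' worth of CPU time
docker run -it --rm --cpus 1.5 busybox sh
```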
If the --cpus value is greater than the host's number of CPU cores, Docker reports an error immediately.
If multiple containers set --cpus and their sum exceeds the host's number of CPU cores, the containers will not fail or exit; they simply compete for the CPU, and the outcome depends on the host's load and each container's CPU shares. In other words, --cpus only guarantees the maximum CPU a container can use when CPU resources are sufficient; Docker cannot guarantee a container that much CPU under all circumstances (that is simply impossible).
Specify CPU Cores
Docker can also restrict which CPUs a container runs on. The --cpuset-cpus parameter limits the container to one or more specific cores.
--cpuset-cpus can be combined with -c, --cpu-shares, restricting containers to certain CPU cores while configuring their share of them.
Limiting which cores a container runs on is not a good practice, because it requires knowing the host's CPU core count in advance and is quite inflexible. It is generally not recommended in production unless there is a specific requirement.
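A minimal sketch of combining the two flags on a multi-core host (busybox is only a stand-in image):

```
# Pin the container to cores 0 and 1 and give it a CPU share of 512 on those cores
docker run -it --rm --cpuset-cpus 0,1 --cpu-shares 512 busybox sh
```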
Other CPU parameters
Option | Description
---|---
`--cpu-period int` | Specifies the CFS scheduling period, usually used together with `--cpu-quota`. The value is in microseconds; the default is 100000 (100 ms) and is generally left unchanged. For Docker 1.13 and later, the `--cpus` flag is recommended instead.
`--cpu-quota int` | The CPU time quota the container gets per period under CFS scheduling, i.e. the CPU time (in microseconds) available to the container per `--cpu-period`; the effective limit is cpu-quota/cpu-period. For Docker 1.13 and later, the `--cpus` flag is recommended instead.
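The two flags amount to a fraction of a core. A minimal sketch that should behave like --cpus 0.5 (busybox is only a stand-in image):

```
# 50000 us of CPU time per 100000 us period = half a core
docker run -it --rm --cpu-period 100000 --cpu-quota 50000 busybox sh
```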
Stress Testing
A demonstration of the resource limits described above.
Query resources on host
The lscpu and free commands are used here:
```
[root@Docker ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Stepping:              3
CPU MHz:               3999.996
BogoMIPS:              7999.99
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt spec_ctrl intel_stibp flush_l1d
[root@Docker ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           936M        260M        340M        6.7M        334M        592M
Swap:          1.6G          0B        1.6G
[root@Docker ~]#
```
Download the Image
You can search Docker Hub for stress.
Download the image and view its help:
```
[root@Docker ~]# docker pull lorel/docker-stress-ng
[root@Docker ~]# docker run -it --rm lorel/docker-stress-ng
stress-ng, version 0.03.11

Usage: stress-ng [OPTION [ARG]]
 --h,  --help             show help
......ellipsis......
Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Note: Sizes can be suffixed with B,K,M,G and times with s,m,h,d,y
[root@Docker ~]#
```
Main command parameters:
- --h, --help: show help; this is the container's default command when no arguments are given
- -c N, --cpu N: start N workers to stress the CPU
- -m N, --vm N: start N workers to stress memory
- --vm-bytes N: memory allocated per vm worker (default 256MB)
Test memory limit
View the description of memory-related parameters in lorel/docker-stress-ng:
```
 -m N, --vm N             start N workers spinning on anonymous mmap
       --vm-bytes N       allocate N bytes per vm worker (default 256MB)
```
Each worker uses 256MB of memory by default, and we keep that default. Start the container with --vm set to two workers and the container's memory limited to 256MB:
```
[root@Docker ~]# docker run --name stress1 -it --rm -m 256m lorel/docker-stress-ng --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm
```
This terminal is now occupied; in another terminal, use the docker top command to view the processes running inside the container:
```
[root@Docker ~]# docker top stress1
UID    PID     PPID    C    STIME    TTY      TIME        CMD
root   5922    5907    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root   6044    5922    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root   6045    5922    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root   6086    6044    13   21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root   6097    6045    47   21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
[root@Docker ~]#
```
From the PID and PPID columns you can see five processes: one parent process created two child processes, and each of those created one worker process.
You can also use the docker stats command to watch the containers' resource usage in real time:
```
$ docker stats
CONTAINER ID   NAME      CPU %    MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O         PIDS
626f38c4a4ad   stress1   18.23%   256MiB / 256MiB     100.00%   656B / 0B   17.7MB / 9.42GB   5
```
The output refreshes in real time.
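You can also verify the limit from the host through the cgroup filesystem. A sketch assuming cgroup v1 with the default cgroupfs driver; the path differs under cgroup v2 or other drivers:

```
# memory.limit_in_bytes should read 268435456 (= 256 MiB)
cat /sys/fs/cgroup/memory/docker/$(docker inspect --format '{{.Id}}' stress1)/memory.limit_in_bytes
```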
Test CPU Limit
Limit the container to at most 2 cores, then start 8 CPU workers for the stress test with the following command:
```
docker run -it --rm --cpus 2 lorel/docker-stress-ng --cpu 8
```
Limit it to 0.5 cores and start 4 CPU workers:
```
[root@Docker ~]# docker run --name stress2 -it --rm --cpus 0.5 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu
```
In a new terminal, use the docker top command to view the processes running inside the container:
```
[root@Docker ~]# docker top stress2
UID    PID     PPID    C    STIME    TTY      TIME        CMD
root   7198    7184    0    22:35    pts/0    00:00:00    /usr/bin/stress-ng --cpu 4
root   7230    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root   7231    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root   7232    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root   7233    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
[root@Docker ~]#
```
One parent process with four child processes.
Then use the docker stats command to view resource usage:
```
$ docker stats
CONTAINER ID   NAME      CPU %    MEM USAGE / LIMIT     MEM %   NET I/O     BLOCK I/O   PIDS
14a341dd23d1   stress2   50.02%   13.75MiB / 908.2MiB   1.51%   656B / 0B   0B / 0B     5
```
Since the limit is 0.5 cores, usage stays around 50% and will not noticeably exceed it.
Test CPU Share
Start three containers with different --cpu-shares values (the default is 1024 when unspecified):
```
[root@Docker ~]# docker run --name stress3.1 -itd --rm --cpu-shares 512 lorel/docker-stress-ng --cpu 4
800d756f76ca4cf20af9fa726349f25e29bc57028e3a1cb738906a68a87dcec4
[root@Docker ~]# docker run --name stress3.2 -itd --rm lorel/docker-stress-ng --cpu 4
4b88007191812b239592373f7de837c25f795877d314ae57943b5410074c6049
[root@Docker ~]# docker run --name stress3.3 -itd --rm --cpu-shares 2048 lorel/docker-stress-ng --cpu 4
8f103395b6ac93d337594fdd1db289b6462e01c3a208dcd3788332458ec03b98
[root@Docker ~]#
```
View CPU usage for three containers:
```
$ docker stats
CONTAINER ID   NAME        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O     BLOCK I/O   PIDS
800d756f76ca   stress3.1   14.18%   14.53MiB / 908.2MiB   1.60%   656B / 0B   0B / 0B     5
4b8800719181   stress3.2   28.60%   15.78MiB / 908.2MiB   1.74%   656B / 0B   0B / 0B     5
8f103395b6ac   stress3.3   56.84%   15.38MiB / 908.2MiB   1.69%   656B / 0B   0B / 0B     5
```
CPU usage is roughly in the ratio 1:2:4 (shares of 512 : 1024 : 2048), which matches expectations.