Catalog
1. Setting Up a Private Registry
  1. Download the Registry Image
  2. Download an Image
  3. Configure the Docker Engine
  4. Run the Registry Container
  5. Push the Image to the Private Registry
  6. Pull from the Private Registry
2. Cgroup Resources
  1. CPU Usage Control
  2. Test CPU and Memory Usage with the stress Tool
  3. Cgroups - Priority/Weight Restrictions
  3. CPU Cycle Limitation
  4. CPU Core Control
  5. Mixed use of cpu quota control parameters
  6. Memory Limit
  7. Limitations of Block IO
  8. Restrictions of bps and iops
Specifying resource constraints when building images
1. Setting Up a Private Registry
1. Download the Registry Image
```
[root@docker ~]# mkdir docker
[root@docker ~]# cd docker
[root@docker docker]# docker pull registry    #Download the registry image
Using default tag: latest
latest: Pulling from library/registry
79e9f2f55bf5: Pull complete
0d96da54f60b: Pull complete
5b27040df4a2: Pull complete
e2ead8259a04: Pull complete
3790aef225b9: Pull complete
Digest: sha256:169211e20e2f2d5d115674681eb79d21a217b296b43374b8e39f97fcf866b375
Status: Downloaded newer image for registry:latest
docker.io/library/registry:latest
```
2. Download an Image
```
[root@docker docker]# docker pull nginx    #Download the image to be pushed
Using default tag: latest
latest: Pulling from library/nginx
e5ae68f74026: Pull complete
21e0df283cd6: Pull complete
ed835de16acd: Pull complete
881ff011f1c9: Pull complete
77700c52c969: Pull complete
44be98c0fab6: Pull complete
Digest: sha256:4a49b1fbd5f544755121dee04f7f717416c21ae1bf5ef862aa34fbffbb9e434f
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@docker docker]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
nginx        latest    f652ca386ed1   19 hours ago   141MB
registry     latest    b8604a3fe854   2 weeks ago    26.2MB
```
3. Configure the Docker Engine
```
[root@docker docker]# vim /etc/docker/daemon.json    #Create the .json file
{
  "insecure-registries": ["192.168.150.10:5000"],
  "registry-mirrors": ["https://t466r8qg.mirror.aliyuncs.com"]
}
[root@docker docker]# systemctl restart docker.service    #Restart the service
```
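After the restart, the daemon's view of the insecure registry can be confirmed with `docker info` (an optional check, not part of the original steps):

```
# Confirm the insecure registry was picked up after the restart
[root@docker docker]# docker info | grep -A 3 "Insecure Registries"
```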
4. Run the Registry Container
```
[root@docker docker]# docker run -d -p 5000:5000 -v /data/registry:/tmp/registry registry
63c1414bcc8b338210b104bf505f6670a2023e7c0d2d55a945aeae2672883424
[root@docker docker]# docker ps -a
CONTAINER ID   IMAGE      COMMAND                    CREATED          STATUS          PORTS                                       NAMES
63c1414bcc8b   registry   "/entrypoint.sh /etc..."   54 seconds ago   Up 53 seconds   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   competent_kowalevski
```
5. Push the Image to the Private Registry
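The push below refers to the image as 192.168.150.10:5000/nginx, so the local nginx image must first be retagged with the registry address. That step is not shown in the transcript; a minimal sketch of the assumed command:

```
# Assumed step: retag the local nginx image with the private registry address
[root@docker docker]# docker tag nginx:latest 192.168.150.10:5000/nginx:latest
```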
```
[root@docker docker]# docker push 192.168.150.10:5000/nginx
Using default tag: latest
The push refers to repository [192.168.150.10:5000/nginx]
2bed47a66c07: Pushed
82caad489ad7: Pushed
d3e1dca44e82: Pushed
c9fcd9c6ced8: Pushed
0664b7821b60: Pushed
9321ff862abb: Pushed
latest: digest: sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47 size: 1570
[root@docker docker]# curl -XGET http://192.168.150.10:5000/v2/_catalog    #List the repositories in the registry
{"repositories":["nginx"]}
```
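The registry's v2 API can also list the tags of the repository that was just pushed (an additional check, not in the original transcript):

```
# Query the tags of the nginx repository in the private registry
[root@docker docker]# curl -XGET http://192.168.150.10:5000/v2/nginx/tags/list
# Expected form: {"name":"nginx","tags":["latest"]}
```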
6. Pull from the Private Registry
```
[root@docker docker]# docker pull 192.168.150.10:5000/nginx
Using default tag: latest
latest: Pulling from nginx
Digest: sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47
Status: Image is up to date for 192.168.150.10:5000/nginx:latest
192.168.150.10:5000/nginx:latest
```
2. Cgroup Resources
Docker uses cgroups to control the resource quotas a container may use, including CPU, memory, and disk, which covers the most common kinds of resource limits and usage control.
Cgroup is short for Control Groups, a mechanism provided by the Linux kernel to limit, account for, and isolate the physical resources (such as CPU, memory, disk I/O, and so on) used by groups of processes.
1. CPU Usage Control
CPU cycle: one CFS scheduling period. The parameter is given in microseconds (µs) and is typically 100000 µs, i.e. 0.1 s.
If a container should be allocated 20% of the CPU, set the quota to 20000 µs, which is equivalent to 0.02 s of CPU time per 0.1 s cycle. A CPU can only be occupied by one process at a time.
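To see these two knobs together, here is a minimal sketch (assuming a cgroup v1 host with the cgroupfs driver; the container name cpu20 and the values are illustrative, not from the original walkthrough):

```
# Allow the container at most 20% of one CPU: 20000 µs of run time per 100000 µs period
[root@docker docker]# docker run -itd --name cpu20 --cpu-period 100000 --cpu-quota 20000 centos:7 /bin/bash
# Read the quota back from the container's cgroup on the host
[root@docker docker]# cat /sys/fs/cgroup/cpu/docker/$(docker inspect -f '{{.Id}}' cpu20)/cpu.cfs_quota_us
# expected: 20000
```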
2. Test CPU and Memory Usage with the stress Tool
```
[root@docker docker]# mkdir /opt/stress
[root@docker docker]# cd /opt/stress
[root@docker stress]# vim Dockerfile
FROM centos:7
RUN yum -y install wget
RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
RUN yum -y install stress
[root@docker stress]# docker build -t centos:stress .
[root@docker stress]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
centos       stress    7037090ec672   14 seconds ago   541MB
```
3. Cgroups - Priority/Weight Restrictions
The cpu-shares weight only takes effect when CPU resources are scarce, that is, when containers are actually competing for CPU. You therefore cannot tell from a container's cpu share alone how much CPU it will receive; the result also depends on the CPU shares of the other containers running at the same time and on what the processes inside them are doing. cpu-shares sets the container's relative priority/weight for CPU time. For example, start two containers with different weights and compare their CPU usage percentages.
```
[root@docker stress]# docker run -tid --name cpu512 --cpu-shares 512 centos:stress stress -c 10    #Spawn 10 CPU-hogging worker processes inside the container
[root@docker stress]# docker stats    #Watch container resource usage in real time
CONTAINER ID   NAME     CPU %     MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
234fd1982b27   cpu512   802.49%   336KiB / 3.683GiB   0.01%   656B / 0B   0B / 0B     11
[root@docker stress]# docker run -tid --name cpu1024 --cpu-shares 1024 centos:stress stress -c 10    #Start a second container for comparison
[root@docker stress]# docker stats
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
a3d1c7297cdd   cpu1024   545.16%   340KiB / 3.683GiB   0.01%   656B / 0B   0B / 0B     11
234fd1982b27   cpu512    252.19%   336KiB / 3.683GiB   0.01%   656B / 0B   0B / 0B     11
```
3. CPU Cycle Limitation
Docker provides the **--cpu-period** and **--cpu-quota** parameters to control the CPU clock cycles a container can be allocated. The two parameters are generally used together.
- cpu-period: specifies the length of the scheduling period after which a container's CPU usage is reallocated.
- cpu-quota: specifies the maximum amount of CPU time the container can use within one period, i.e. the share of resources it receives. Unlike --cpu-shares, this is an absolute value; the container will never use more CPU than configured.
Both cpu-period and cpu-quota are given in microseconds (µs). cpu-period has a minimum of 1000 µs, a maximum of 1 second (10^6 µs), and a default of 0.1 seconds (100000 µs). cpu-quota defaults to -1, meaning no limit.
1. Operation examples
For example, if the container's processes need 0.2 seconds of a single CPU every 1 second, set cpu-period to 1000000 (that is, 1 second) and cpu-quota to 200000 (0.2 seconds).
In a multi-core situation, if you want the container's processes to fully occupy two CPUs, set cpu-period to 100000 (that is, 0.1 seconds) and cpu-quota to 200000 (0.2 seconds).
```
[root@docker docker]# docker run -tid --cpu-period 100000 --cpu-quota 200000 centos:stress
ad2286f200e2d7a12a6ed4493c85c4f60ef14d0cb8e1187d290f3b98d584f729
[root@docker docker]# docker ps -a
CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS         PORTS   NAMES
ad2286f200e2   centos:stress   "/bin/bash"   9 seconds ago   Up 8 seconds           upbeat_perlman
[root@docker docker]# docker exec -it ad2286f200e2 bash    #Enter the container
```
2. Query Container's Resources
```
[root@ad2286f200e2 /]# cd /sys/fs/cgroup/cpu
[root@ad2286f200e2 cpu]# ls
cgroup.clone_children  cgroup.procs       cpu.cfs_quota_us  cpu.rt_runtime_us  cpu.stat      cpuacct.usage         notify_on_release
cgroup.event_control   cpu.cfs_period_us  cpu.rt_period_us  cpu.shares         cpuacct.stat  cpuacct.usage_percpu  tasks
[root@ad2286f200e2 cpu]# cat cpu.cfs_period_us
100000
[root@ad2286f200e2 cpu]# cat cpu.cfs_quota_us
200000
```
4. CPU Core Control
On servers with multi-core CPUs, Docker can also control which CPU cores a container runs on, using the --cpuset-cpus parameter. This is particularly useful on servers with many CPUs: containers that need high-performance computing can be pinned to dedicated cores.
1. Create a Container
The container created below can only use cores 0 and 1; the cpuset of the resulting cgroup reflects this configuration (see the check after the example).
```
[root@docker stress]# docker run -tid --name cpu1 --cpuset-cpus 0-1 centos:stress
c4c808b2ee9b8bfc9ebdeb2bee77f569ac40fb0243cf01b7919fd045fb37f7ec
[root@docker stress]# docker ps -a
CONTAINER ID   IMAGE           COMMAND       CREATED              STATUS              PORTS   NAMES
c4c808b2ee9b   centos:stress   "/bin/bash"   About a minute ago   Up About a minute           cpu1
```
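To confirm the binding, the container's cpuset can be read from its cgroup on the host (a sketch; the path assumes cgroup v1 with the cgroupfs driver):

```
# Check which cores the cpu1 container's cgroup is allowed to use
[root@docker stress]# cat /sys/fs/cgroup/cpuset/docker/$(docker inspect -f '{{.Id}}' cpu1)/cpuset.cpus
# expected: 0-1
```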
2. View Core Occupancy
Run the following command to generate load inside the container and observe that its processes stay bound to the specified CPU cores.
```
[root@docker stress]# docker exec -it cpu1 /bin/bash -c "stress -c 10"
stress: info: [26] dispatching hogs: 10 cpu, 0 io, 0 vm, 0 hdd
[root@docker stress]# top    #Press 1 to see per-core usage (only cpu0 and cpu1 are working)
Tasks: 227 total,  11 running, 216 sleeping,   0 stopped,   0 zombie
%Cpu0 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu4 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu5 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
```
5. Mixed use of cpu quota control parameters
With the --cpuset-cpus parameter you can pin container A to CPU core 0 and container B to CPU core 1. When these are the only two containers using those cores on the host, each consumes all of its own core, so --cpu-shares has no noticeable effect. The --cpuset-cpus and --cpuset-mems parameters are only meaningful on servers with multiple CPU cores and memory nodes, and they must match the actual physical configuration, otherwise resource control will not work as intended. On a multi-core system, pinning containers to specific cores with --cpuset-cpus makes the effect of the quota parameters easy to test.
```
[root@docker stress]# docker run -tid --name cpu4 --cpuset-cpus 1 --cpu-shares 512 centos:stress stress -c 1
ec1076a777829139d1246f7d1ac66c56d39aa73ed87fa9b273ff5b184a332a76
[root@docker stress]# top
top - 19:41:56 up 5:59, 2 users, load average: 1.01, 3.49, 6.19
Tasks: 219 total,   2 running, 217 sleeping,   0 stopped,   0 zombie
%Cpu0 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
[root@docker stress]# docker run -tid --name cpu5 --cpuset-cpus 3 --cpu-shares 1024 centos:stress stress -c 1
a2f9694e9a87e4bd9f3e1f176d02fcb8b2b26a6ee4463fef88486f6f33fc4784
[root@docker stress]# top
top - 19:50:23 up 6:07, 2 users, load average: 1.53, 1.60, 4.06
Tasks: 223 total,   3 running, 220 sleeping,   0 stopped,   0 zombie
%Cpu0 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
[root@docker stress]# docker stats
CONTAINER ID   NAME   CPU %    MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
ef3f85210952   cpu5   33.89%   216KiB / 3.683GiB   0.01%   656B / 0B   0B / 0B     2
a2f9694e9a87   cpu4   67.04%   124KiB / 3.683GiB   0.00%   656B / 0B   0B / 0B     2
```
6. Memory Limit
Like the operating system, the memory available to the container consists of two parts: physical memory and swap.
Docker controls container memory usage through two sets of parameters:
- -m or --memory: set the memory usage limit, e.g. 100M, 1024M
- --memory-swap: set the usage limit for memory plus swap
```
#Run the following command to allow the container at most 200M of memory and 300M of memory plus swap (i.e. 100M of swap).
[root@docker stress]# docker run -it -m 200M --memory-swap=300M centos:stress stress --vm 1 --vm-bytes 280M
# --vm 1: start one memory worker thread
# --vm-bytes 280M: allocate 280 MB per thread
[root@docker stress]# docker stats    #Watch from another terminal
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
2ea4735905de   stoic_shockley   0.00%   408KiB / 200MiB     0.20%   656B / 0B   0B / 0B     1
```
```
[root@docker stress]# docker run -it centos:stress
[root@docker stress]# docker stats    #No limit by default
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
b408c6491ebe   laughing_maxwell   0.00%   400KiB / 3.683GiB   0.01%   656B / 0B   0B / 0B     1
```
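If a worker tries to allocate more than the --memory-swap cap allows, the kernel's OOM killer terminates it and the container exits. A hedged sketch of how that could be provoked with the same image (the exact error text depends on the stress version):

```
# Ask for more memory (310M) than the 300M memory+swap cap permits;
# the allocation should fail / be OOM-killed rather than succeed.
[root@docker stress]# docker run -it -m 200M --memory-swap=300M centos:stress stress --vm 1 --vm-bytes 310M
```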
7. Limitations of Block IO
By default, all containers read from and write to disk with equal priority; the block IO priority of a container can be changed with the **--blkio-weight** parameter.
--blkio-weight is similar to --cpu-shares: it sets a relative weight, which defaults to 500.
In the following example, container A's disk read/write weight is twice that of container B.
```
[root@docker docker]# docker run -it --name container_A --blkio-weight 600 centos:stress
[root@5438aaa49750 /]# cat /sys/fs/cgroup/blkio/blkio.weight
600
[root@docker docker]# docker run -it --name container_B --blkio-weight 300 centos:stress
[root@5438aaa49750 /]# cat /sys/fs/cgroup/blkio/blkio.weight
300
```
8. Restrictions of bps and iops
bps and iops limits control the container's actual disk I/O.
bps (bytes per second) is the amount of data read or written per second.
iops (I/O per second) is the number of I/O operations per second.
The bps and iops of a container can be limited with the following parameters:
- --device-read-bps: limit the read bps of a device
- --device-write-bps: limit the write bps of a device
- --device-read-iops: limit the read iops of a device
- --device-write-iops: limit the write iops of a device
Limit the container's write rate to /dev/sda to 5 MB/s:
```
[root@docker docker]# docker run -it --device-write-bps /dev/sda:5MB centos:stress
[root@d42b2ccf5237 /]# dd if=/dev/zero of=test bs=1M count=10 oflag=direct    #oflag=direct: write directly to disk, bypassing the page cache
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 2.00132 s, 5.2 MB/s    #10 MB copied in 2 s at 5.2 MB/s
```
Limit the container's write rate to /dev/sda to 10 MB/s:
```
[root@docker docker]# docker run -it --device-write-bps /dev/sda:10MB centos:stress
[root@d42b2ccf5237 /]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.95113 s, 10.5 MB/s    #100 MB copied in about 10 s at 10.5 MB/s
```
With no limit, the disk write is significantly faster:
```
[root@docker ~]# docker run -it centos:stress
[root@07ca4ac0fce2 /]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0960668 s, 1.1 GB/s    #100 MB copied in 0.09 s at 1.1 GB/s
```
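The iops variants work the same way; a sketch that throttles write operations rather than throughput (the limit of 100 and the dd parameters are illustrative, not from the original):

```
# Cap the container's writes to /dev/sda at 100 I/O operations per second
[root@docker docker]# docker run -it --device-write-iops /dev/sda:100 centos:stress
# Inside the container, a direct-I/O dd with small blocks would now be throttled by iops:
# dd if=/dev/zero of=test bs=4k count=1000 oflag=direct
```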
Specifying resource constraints when building images
Parameter | Description |
---|---|
--build-arg=[ ] | Set build-time variables |
--cpu-shares | Set the CPU usage weight |
--cpu-period | Limit the CPU CFS period |
--cpu-quota | Limit the CPU CFS quota |
--cpuset-cpus | Specify the CPU IDs to use |
--cpuset-mems | Specify the memory node IDs to use |
--disable-content-trust | Skip image verification (enabled by default) |
-f | Specify the path of the Dockerfile to use |
--force-rm | Always remove intermediate containers during the build |
--isolation | Container isolation technology to use |
--label=[ ] | Set metadata for the image |
-m | Set the maximum memory |
--memory-swap | Set the maximum of memory + swap; '-1' means unlimited swap |
--no-cache | Do not use the cache when building the image |
--pull | Always attempt to pull a newer version of the image |
--quiet, -q | Quiet mode; only output the image ID on success |
--rm | Remove intermediate containers after a successful build |
--shm-size | Size of /dev/shm; default is 64M |
--ulimit | Ulimit options |
--squash | Squash all operations in the Dockerfile into a single layer |
--tag, -t | Name and optional tag of the image, in name:tag or name format; multiple tags can be set for one image in a single build |
--network | Networking mode for the RUN instructions during the build (default: default) |
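As an illustration (the flag values are arbitrary, not from the original walkthrough), the stress image built earlier could be rebuilt while constraining the intermediate build containers themselves:

```
# Rebuild centos:stress while limiting the CPU weight and memory
# available to the containers Docker creates for each build step.
[root@docker stress]# docker build -t centos:stress --cpu-shares 512 --memory 1g --no-cache .
```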