1 Linux Kernel Implements Namespace Creation
1.1 ip netns command
The ip netns command is used to perform various operations on a Network Namespace. It comes from the iproute package, which is usually installed by default; if it is missing, install it yourself.
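If it is missing, on a CentOS/RHEL-family host (the kind used in the examples below) it can typically be installed as follows; treat the exact package-manager command as an assumption about your distribution:

# Check whether the iproute package is present and install it if not
[root@Docker ~]# rpm -q iproute || dnf install -y iproute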
Note: The ip netns command requires sudo privileges when modifying network configuration.
You can view the command's help information with ip netns help:
[root@Docker ~]# ip netns help
Usage:	ip netns list
	ip netns add NAME
	ip netns attach NAME PID
	ip netns set NAME NETNSID
	ip [-all] netns delete [NAME]
	ip netns identify [PID]
	ip netns pids NAME
	ip [-all] netns exec [NAME] cmd ...
	ip netns monitor
	ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
By default, there is no Network Namespace on Linux, so the ip netns list command does not return any information.
[root@Docker ~]# ip netns list
[root@Docker ~]#
1.2 Create Network Namespace
//Create a namespace named ns0
[root@Docker ~]# ip netns add ns0
[root@Docker ~]# ip netns list
ns0
The newly created Network Namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports a Cannot create namespace file "/var/run/netns/ns0": File exists error.
[root@Docker ~]# ls /var/run/netns/
ns0
[root@Docker ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
Each Network Namespace has its own network-related resources, such as network interfaces, routing tables, ARP tables, iptables rules, and so on.
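As an illustrative check (not part of the original walkthrough), you can list some of these per-namespace resources for ns0 and confirm that they are independent of the host's:

[root@Docker ~]# ip netns exec ns0 ip route       # routing table of ns0 (empty until interfaces are configured)
[root@Docker ~]# ip netns exec ns0 ip neigh       # ARP/neighbour table of ns0
[root@Docker ~]# ip netns exec ns0 iptables -nvL  # iptables rules inside ns0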
1.3 Operating on a Network Namespace
The ip command provides the ip netns exec subcommand to execute commands inside the corresponding Network Namespace.
//View the network interfaces in the newly created Network Namespace
[root@Docker ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
//A lo loopback interface is created by default in the new Network Namespace, but it is down.
//Pinging it at this point reports Network is unreachable
[root@Docker ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
//Bring up the lo loopback interface with the following command
[root@Docker ~]# ip netns exec ns0 ip link set lo up
[root@Docker ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
[root@Docker ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.020 ms
1.4 Transferring Devices
- We can transfer devices (such as veth) between different Network Namespaces. Since a device can only belong to one Network Namespace at a time, it is no longer visible in the original Network Namespace after the transfer
- The veth device is a transferable device; many other devices (such as lo, vxlan, ppp, bridge, etc.) are not transferable, as the check below illustrates
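As a rough way to check this (an illustration assuming ethtool is installed, not a step from the original text), the netns-local feature flag reported by ethtool indicates whether a device is pinned to its namespace; the veth0 example assumes a veth pair already exists (see 1.6):

# A device showing "netns-local: on" cannot be moved to another namespace
[root@Docker ~]# ethtool -k lo | grep netns-local      # expected: on  (lo is not transferable)
[root@Docker ~]# ethtool -k veth0 | grep netns-local   # expected: off (veth devices are transferable)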
1.5 veth pair
- A device type called veth is used by Linux containers and was built specifically for them. veth is short for Virtual Ethernet; it simply forwards packets sent from one network namespace to another. veth devices come in pairs, with one end inside the container and the other end outside it, visible on the host machine
- veth devices always appear in pairs: data sent into one end always comes out of the other end as received data. Once created and configured correctly, data written to one end is handed by veth to the kernel network subsystem and can be read at the other end (packets transmitted on one end of a veth pair are received on the other end). veth works at the L2 data link layer, and a veth-pair device does not alter the contents of packets while forwarding them
- The veth pair was introduced to allow direct communication between different Network Namespaces; it can be used to connect two Network Namespaces directly
veth device characteristics
- One end of a veth device, like any other network device, connects to the kernel protocol stack
- veth devices appear in pairs, with the two ends connected to each other
- When one end receives a data send request from the protocol stack, it sends the data out of the other end of the pair
1.6 Create veth pair
[root@Docker ~]# ip a | grep veth
[root@Docker ~]# ip link add type veth
[root@Docker ~]# ip a | grep veth
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
[root@Docker ~]# ip a
......
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:1b:68:0b:cb:e5 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:f0:a8:2c:07:78 brd ff:ff:ff:ff:ff:ff
As you can see, a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1; the pair is not yet enabled at this point.
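As a side note (a sketch, not part of the original walkthrough), the two ends can also be named explicitly at creation time, and deleting either end removes the whole pair:

# Create a veth pair with explicitly named ends
[root@Docker ~]# ip link add veth-a type veth peer name veth-b
# Deleting one end removes both ends of the pair
[root@Docker ~]# ip link del veth-a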
1.7 Implement Network Namespace Communication
Here we use a veth pair to communicate between two different Network Namespaces. We have already created a Network Namespace named ns0; next we create another Network Namespace named ns1.
[root@Docker ~]# ip netns add ns1
[root@Docker ~]# ip netns list
ns1
ns0

//Add veth0 to ns0 and veth1 to ns1
[root@Docker ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
[root@Docker ~]# ip netns exec ns1 ip a    #The lo loopback interface of the newly created ns1 is still down
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@Docker ~]# ip link set veth0 netns ns0
[root@Docker ~]# ip link set veth1 netns ns1
[root@Docker ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:1b:68:0b:cb:e5 brd ff:ff:ff:ff:ff:ff link-netns ns1
[root@Docker ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:f0:a8:2c:07:78 brd ff:ff:ff:ff:ff:ff link-netns ns0

//Configure and enable IP addresses for each end of the veth pair
[root@Docker ~]# ip netns exec ns0 ip link set veth0 up
[root@Docker ~]# ip netns exec ns0 ip addr add 192.168.25.100/24 dev veth0
[root@Docker ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:1b:68:0b:cb:e5 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.25.100/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c1b:68ff:fe0b:cbe5/64 scope link
       valid_lft forever preferred_lft forever
[root@Docker ~]# ip netns exec ns1 ip link set veth1 up
[root@Docker ~]# ip netns exec ns1 ip link set lo up    #Bring up the lo loopback interface
[root@Docker ~]# ip netns exec ns1 ip addr add 192.168.25.200/24 dev veth1
[root@Docker ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f0:a8:2c:07:78 brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.168.25.200/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f0:a8ff:fe2c:778/64 scope link
       valid_lft forever preferred_lft forever
As you can see above, we have successfully enabled the veth pair and assigned an IP address to each of its ends. Now we try to access the address in ns0 from ns1.
[root@Docker ~]# ip netns exec ns1 ping 192.168.25.100
PING 192.168.25.100 (192.168.25.100) 56(84) bytes of data.
#The veth pair successfully implements network communication between two different Network Namespaces
64 bytes from 192.168.25.100: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 192.168.25.100: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 192.168.25.100: icmp_seq=3 ttl=64 time=0.027 ms
64 bytes from 192.168.25.100: icmp_seq=4 ttl=64 time=0.040 ms
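Because veth works at the data link layer, you can also look at the neighbour (ARP) table in ns1 after the ping; this is an extra illustrative check, not part of the original output:

# The MAC address of veth0 (0e:1b:68:0b:cb:e5) should appear as the neighbour entry for 192.168.25.100
[root@Docker ~]# ip netns exec ns1 ip neigh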
1.8 veth device rename
[root@Docker ~]# ip netns exec ns0 ip a
.......
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:1b:68:0b:cb:e5 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.25.100/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c1b:68ff:fe0b:cbe5/64 scope link
       valid_lft forever preferred_lft forever
[root@Docker ~]# ip netns exec ns0 ip link set veth0 name eth0
RTNETLINK answers: Device or resource busy    //The device is busy
[root@Docker ~]# ip netns exec ns0 ip link set veth0 down    //It needs to be brought down first
[root@Docker ~]# ip netns exec ns0 ip link set veth0 name eth0    //Rename it
[root@Docker ~]# ip netns exec ns0 ip link set eth0 up    //Bring it up again under the new name
[root@Docker ~]# ip netns exec ns0 ip a    //Verify
........
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:1b:68:0b:cb:e5 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.25.100/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c1b:68ff:fe0b:cbe5/64 scope link
       valid_lft forever preferred_lft forever
2 Four Network Mode Configurations
2.1 bridge mode configuration
[root@Docker ~]# docker images
centos    latest    5d0da3dc9764    2 months ago    231MB
[root@Docker ~]# docker run -it --name jj --rm centos
[root@2847c23de01e /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# Adding the --network bridge option when creating a container has the same effect as omitting --network, because bridge mode is Docker's default network mode
[root@Docker ~]# docker run -it --name jj --rm --network bridge centos
[root@7ba4fc65cb89 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
2.2 none mode configuration
[root@Docker ~]# docker run -it --name jj --rm --network none centos
[root@fc57f21d1fe0 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2.3 container mode configuration
//Start the first container
[root@Docker ~]# docker run -it --name jj --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

//Start the second container
[root@Docker ~]# docker run -it --name zz --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
You can see that the IP address of the container named zz is 172.17.0.3, different from the IP address of the first container; that is, the two containers do not share a network. If we change the way the second container is started, we can make the zz container's IP the same as the jj container's IP, i.e. they share an IP but not a file system.
[root@Docker ~]# docker run -it --name zz --rm --network container:jj busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Here we create a directory on the jj container
/ # mkdir /tmp/zj
/ # ls /tmp/
zj
Check the /tmp directory on the zz container and you will see that the directory is not there, because the file systems are isolated and only the network is shared.
/ # ls /tmp/
/ #
Deploy a site on the zz container
/ # echo "perfect world" > /tmp/index.html
/ # ls /tmp/
index.html
/ # httpd -h /tmp/
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
Access this site from the jj container using a loopback address
/ # wget -O - -q 127.0.0.2:80
perfect world
2.4 host mode configuration
//Specify host mode directly when starting the container
[root@Docker ~]# docker run -it --name zz --rm --network host busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 00:0c:29:1b:44:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.148/24 brd 192.168.25.255 scope global dynamic noprefixroute ens33
       valid_lft 1318sec preferred_lft 1318sec
    inet6 fe80::2e0f:34ad:7328:bbf9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
    link/ether 02:42:84:0b:44:01 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:84ff:fe0b:4401/64 scope link
       valid_lft forever preferred_lft forever

[root@Docker ~]# ip a    //host IP
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:1b:44:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.148/24 brd 192.168.25.255 scope global dynamic noprefixroute ens33
       valid_lft 1293sec preferred_lft 1293sec
    inet6 fe80::2e0f:34ad:7328:bbf9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:84:0b:44:01 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:84ff:fe0b:4401/64 scope link
       valid_lft forever preferred_lft forever
At this point, if we launch an http site in this container, we can access the site directly in a browser using the host's IP.
/ # echo "perfect world" > /tmp/index.html
/ # ls /tmp/
index.html
/ # httpd -h /tmp/
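Because the container shares the host's network namespace, the httpd socket shows up directly on the host; an illustrative check from another terminal on the host, using the host IP from this walkthrough:

# The busybox httpd started in the container is listening on the host's port 80
[root@Docker ~]# ss -antl    # look for the :80 LISTEN entry
[root@Docker ~]# curl http://192.168.25.148/    # expected to return: perfect world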
3 Common operations of containers
3.1 View the host name of the container
[root@Docker ~]# docker run -it --name jj --rm busybox
/ # hostname
74f55268e3cf    //The host name is the same as the ID of the running container
[root@Docker ~]# docker container ls
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
74f55268e3cf   busybox   "sh"      44 seconds ago   Up 43 seconds             jj
3.2 Inject host name at container startup
[root@Docker ~]# docker run -it --name jj --rm --hostname node1 busybox
/ # hostname
node1
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	node1    # A host-name-to-IP mapping is created automatically when a host name is injected
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.25.2    # DNS is also automatically configured to the host's DNS
/ # ping www.baidu.com
PING www.baidu.com (36.152.44.96): 56 data bytes
64 bytes from 36.152.44.96: seq=0 ttl=127 time=65.083 ms
64 bytes from 36.152.44.96: seq=1 ttl=127 time=62.052 ms
64 bytes from 36.152.44.96: seq=2 ttl=127 time=182.341 ms
64 bytes from 36.152.44.96: seq=3 ttl=127 time=60.852 ms
3.3 Manually specify the DNS to be used by the container
[root@Docker ~]# docker run -it --name jj --rm --hostname node1 --dns 8.8.8.8 busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 8.8.8.8
3.4 Manually inject a host-name-to-IP mapping into the /etc/hosts file
[root@Docker ~]# docker run -it --name jj --rm --hostname node1 --add-host www.jj.com:8.8.8.8 busybox
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
8.8.8.8	www.jj.com
172.17.0.2	node1
3.5 Open container port
docker run has a -p option that maps an application port in the container to a port on the host machine, so that an external host can reach the application in the container by accessing that host port.
The -p option can be used multiple times, and the port it exposes must be a port the container is actually listening on.
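For example (a syntax illustration only; the image tag is the one that appears later in this section, and whether the container really listens on both ports depends on how it is configured), several mappings can be declared in one run:

# Map container port 80 to host port 80 and container port 443 to host port 443
[root@Docker ~]# docker run -it --name jj --rm -p 80:80 -p 443:443 zhaojie10/nginx:v0.1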
Formats for using the -p option
- -p <containerPort>: maps the specified container port to a dynamic port on all addresses of the host
[root@Docker ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED        SIZE
zhaojie10/nginx   v0.1      7ded9a3d20e4   46 hours ago   550MB
busybox           latest    d23834f29b38   5 days ago     1.24MB
centos            latest    5d0da3dc9764   2 months ago   231MB
[root@Docker ~]# docker run -it --name jj --rm -p 80 7ded9a3d20e4
[root@82e92a29d0bf /]# /usr/local/nginx/sbin/nginx
[root@82e92a29d0bf /]# ss -antl
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128            0.0.0.0:80          0.0.0.0:*

//The dynamic port is a random port; the actual mapping can be viewed with the docker port command
[root@Docker ~]# docker port jj
80/tcp -> 0.0.0.0:49153
80/tcp -> :::49153
Let's visit this port in a browser to see whether we can reach the site in the container.
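Equivalently, the check can be done from the command line; note that the dynamic port must be included (49153 is the port that happened to be assigned in this run):

[root@Docker ~]# curl http://192.168.25.148:49153/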
//The iptables firewall rules are generated automatically when the container is created and deleted when the container is removed
[root@Docker ~]# iptables -t nat -nvL    //View iptables rules
......
Chain DOCKER (2 references)
 pkts bytes target     prot opt in        out   source      destination
    0     0 RETURN     all  --  docker0   *     0.0.0.0/0   0.0.0.0/0
    2    96 DNAT       tcp  --  !docker0  *     0.0.0.0/0   0.0.0.0/0    tcp dpt:49153 to:172.17.0.2:80
- -p <hostPort>:<containerPort>: maps the container port to the specified host port
[root@Docker ~]# docker run -it --name jj --rm -p 80:80 7ded9a3d20e4
[root@80d3b261bcb6 /]# /usr/local/nginx/sbin/nginx
[root@80d3b261bcb6 /]# ss -antl
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128            0.0.0.0:80          0.0.0.0:*

//View the port mapping from another terminal
[root@Docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS          PORTS                               NAMES
80d3b261bcb6   7ded9a3d20e4   "/bin/bash"   47 seconds ago   Up 46 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   jj
Test with the host's IP in a browser; note that port 80 does not need to be specified here.
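The same check from the command line, assuming the host IP used throughout this walkthrough:

# Port 80 is the default HTTP port, so it can be omitted from the URL
[root@Docker ~]# curl -I http://192.168.25.148/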
- -p <ip>::<containerPort>: maps the specified container port to a dynamic port on the specified host IP
[root@Docker ~]# docker run -it --name jj --rm -p 192.168.25.148::80 7ded9a3d20e4
[root@1594c11b18c5 /]# /usr/local/nginx/sbin/nginx
[root@1594c11b18c5 /]# ss -antl
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128            0.0.0.0:80          0.0.0.0:*

//View the port mapping from another terminal
[root@Docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS              PORTS                          NAMES
1594c11b18c5   7ded9a3d20e4   "/bin/bash"   About a minute ago   Up About a minute   192.168.25.148:49153->80/tcp   jj
[root@Docker ~]# docker port jj
80/tcp -> 192.168.25.148:49153
- -p <ip>:<hostPort>:<containerPort>: maps the specified container port to the specified port on the specified host IP
[root@Docker ~]# docker run -it --name jj --rm -p 192.168.25.148:80:80 7ded9a3d20e4
[root@1c6f24a368a2 /]# /usr/local/nginx/sbin/nginx
[root@1c6f24a368a2 /]# ss -antl
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128            0.0.0.0:80          0.0.0.0:*

//View the port mapping from another terminal
[root@Docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS          PORTS                       NAMES
1c6f24a368a2   7ded9a3d20e4   "/bin/bash"   45 seconds ago   Up 44 seconds   192.168.25.148:80->80/tcp   jj
[root@Docker ~]# docker port jj
80/tcp -> 192.168.25.148:80
4 Customize network attribute information for docker0 Bridge
Customizing the network attributes of the docker0 bridge requires modifying the /etc/docker/daemon.json configuration file, for example:
{ "bip": "192.168.1.5/24", "fixed-cidr": "192.168.1.5/25", "fixed-cidr-v6": "2001:db8::/64", "mtu": 1500, "default-gateway": "10.20.1.1", "default-gateway-v6": "2001:db8:abcd::89", "dns": ["10.20.1.2","10.20.1.3"] }
[root@Docker ~]# cat /etc/docker/daemon.json
{
    "bip": "192.168.1.5/24",    #The core option is bip (bridge ip), which specifies the IP address of the docker0 bridge itself; other options can be derived from this address
    "dns": ["10.20.1.2","10.20.1.3"],
    "registry-mirrors": ["https://xj3hc284.mirror.aliyuncs.com"]    #Registry mirror (accelerator)
}
[root@Docker ~]# systemctl daemon-reload
[root@Docker ~]# systemctl restart docker
[root@Docker ~]# docker run -it --name jj --rm 7ded9a3d20e4
[root@4c31259e8342 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
38: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:a8:01:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
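After the restart, the new bridge address can also be verified on the host (an extra check, not shown in the original output):

# docker0 should now carry the address configured via bip
[root@Docker ~]# ip addr show docker0 | grep inet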
4.1 docker remote connection
Docker uses a C/S architecture: the server side is the docker daemon (dockerd) and the client side is the docker command-line tool. By default the daemon does not listen on any network port, so Docker can only be operated locally with the docker client or the Docker API. To support access from remote clients, the following settings are required (this is insecure: once a listening port is opened, anyone who can reach it can connect to the docker daemon remotely and operate it).
The dockerd daemon by default only listens on a Unix socket (/var/run/docker.sock). If you want it to use a TCP socket as well, modify the /etc/docker/daemon.json configuration file, add the following, and then restart the docker service.
"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
On the client, pass the -H|--host option directly to docker to specify which host's docker daemon you want to control
docker -H 192.168.10.145:2375 ps
4.2 docker Create Custom Bridge
//Create an additional custom bridge, different from docker0
[root@Docker ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e1a6dd710dc7   bridge    bridge    local
f07e7613bacb   host      host      local
d951c3cc12d5   none      null      local
[root@Docker ~]# docker network create -d bridge --subnet "192.168.24.0/24" --gateway "192.168.24.1" br0
718edc8abf165c27d07aa6c1fcecf1a627c8f477f9ef02b39a9480db3c08d228
[root@Docker ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
718edc8abf16   br0       bridge    local
e1a6dd710dc7   bridge    bridge    local
f07e7613bacb   host      host      local
d951c3cc12d5   none      null      local

//Create a container using the newly created custom bridge
[root@Docker ~]# docker run -it --name jj --rm --network br0 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
41: eth0@if42: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a8:18:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.24.2/24 brd 192.168.24.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.24.1    0.0.0.0         UG    0      0        0 eth0
192.168.24.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
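An existing container can also be attached to or detached from a network afterwards; a brief sketch using the resources created above:

# Give the jj container (running on br0) a second interface on the default bridge, then remove it again
[root@Docker ~]# docker network connect bridge jj
[root@Docker ~]# docker network disconnect bridge jj
# Delete the custom bridge when it is no longer needed (no containers may still be attached)
[root@Docker ~]# docker network rm br0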