The Linux kernel implements Network Namespaces to provide isolated, independent copies of the network stack.
ip netns command
You can use the ip netns command to perform various operations on a Network Namespace. The ip netns command comes from the iproute package, which is usually installed by default; if not, install it yourself.
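For example, a minimal install sketch assuming a RHEL/CentOS host (on Debian/Ubuntu the package is named iproute2):

//Install the iproute package that provides the ip command
[root@localhost ~]# yum -y install iproute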
Note: The ip netns command requires sudo privileges when modifying network configuration.
You can view the command's help information with ip netns help:
[root@localhost ~]# ip netns help
Usage:  ip netns list
        ip netns add NAME
        ip netns attach NAME PID
        ip netns set NAME NETNSID
        ip [-all] netns delete [NAME]
        ip netns identify [PID]
        ip netns pids NAME
        ip [-all] netns exec [NAME] cmd ...
        ip netns monitor
        ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
By default, there are no named Network Namespaces on a Linux system, so the ip netns list command returns nothing.
Create Network Namespace
Create a namespace named ns0 with the following command:
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
The newly created Network Namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports a Cannot create namespace file "/var/run/netns/ns0": File exists error.
[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
Operating Network Namespace
The ip command provides the ip netns exec subcommand to execute commands in the corresponding Network Namespace.
View the network card information of the newly created Network Namespace:
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
You can see that a lo loopback interface is created by default in the new Network Namespace, but it is down. Pinging it at this point fails with Network is unreachable:
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
Enable the lo loopback network card with the following command:
//Bring up lo
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.024 ms
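The same ip netns exec pattern works for the namespace's other resources mentioned earlier; a quick sketch (run against the fresh ns0, whose routing, ARP, and iptables tables are still essentially empty):

//Routing table in ns0
[root@localhost ~]# ip netns exec ns0 ip route show
//ARP/neighbour table in ns0
[root@localhost ~]# ip netns exec ns0 ip neigh show
//iptables rules in ns0
[root@localhost ~]# ip netns exec ns0 iptables -nL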
Transferring devices
We can transfer devices (such as veth devices) between different Network Namespaces. Since a device can only belong to one Network Namespace at a time, it is no longer visible in the original Network Namespace after the transfer.
veth devices are transferable, while many other devices (such as lo, vxlan, ppp, bridge) are not.
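Whether a device can be transferred is reflected in its netns-local feature flag, which ethtool can show (a hedged check; output format may vary by kernel version):

//"on [fixed]" means the device is pinned to its namespace and cannot be moved
[root@localhost ~]# ethtool -k lo | grep netns-local
netns-local: on [fixed]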
veth pair
The full name of veth pair is Virtual Ethernet Pair. It is a pair of connected ports: every packet that enters one end of the pair comes out of the other end, and vice versa.
The veth pair was introduced to allow direct communication between different Network Namespaces; it can be used to connect two Network Namespaces directly.
Create veth pair
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
.........
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:d0:41:7f:d0:16 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f2:3b:9d:40:e3:f0 brd ff:ff:ff:ff:ff:ff
As you can see, a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. At this point the pair is not yet enabled.
Implement Network Namespace Communication
Here we use a veth pair to enable communication between two different Network Namespaces. We have already created a Network Namespace named ns0; next, we create another one named ns1:
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0 (id: 0)
Then we add veth0 to ns0 and veth1 to ns1
//Check the current state of ns0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
//Move veth0 into ns0
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:d0:41:7f:d0:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip link set veth1 netns ns1
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f2:3b:9d:40:e3:f0 brd ff:ff:ff:ff:ff:ff link-netns ns0
Then we configure an IP address on each end of the veth pair and bring the interfaces up:
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.168.1.1/24 dev veth0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 52:d0:41:7f:d0:16 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.1.1/24 scope global veth0
       valid_lft forever preferred_lft forever
//Bring up lo in ns1
[root@localhost ~]# ip netns exec ns1 ip link set lo up
//Bring up veth1
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
//Add an IP address to veth1
[root@localhost ~]# ip netns exec ns1 ip addr add 192.168.1.2/24 dev veth1
//View the state of this end of the veth pair
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f2:3b:9d:40:e3:f0 brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.168.1.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f03b:9dff:fe40:e3f0/64 scope link
       valid_lft forever preferred_lft forever
As shown above, we have enabled the veth pair and assigned an IP address to each end. Now we try to reach the address in ns0 from ns1:
[root@localhost ~]# ip netns exec ns1 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.029 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.048 ms
^Z
You can see that the veth pair successfully implemented network communication between two different Network Namespaces.
veth device rename (the device must be brought down before it can be renamed)
[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name eth0
[root@localhost ~]# ip netns exec ns0 ip link set eth0 up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:d0:41:7f:d0:16 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.1.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::50d0:41ff:fe7f:d016/64 scope link
       valid_lft forever preferred_lft forever
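When you are done experimenting, the namespaces can be removed with ip netns delete (listed in the help output above). Deleting a namespace also destroys the veth end inside it, which removes its peer as well:

[root@localhost ~]# ip netns delete ns0
[root@localhost ~]# ip netns delete ns1
[root@localhost ~]# ip netns list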
Four network mode configurations
Docker supports four network modes: bridge (the default), none, container, and host.
bridge mode configuration
[root@localhost ~]# docker run -it --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

//Passing --network bridge when creating a container has the same effect as omitting the --network option
[root@localhost ~]# docker run -it --rm --network bridge busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
none mode configuration
[root@localhost ~]# docker run -it --rm --network none busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ #
container mode configuration
Start the first container
[root@localhost ~]# docker run -it --name b1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
Start the second container
[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
You can see that the IP address of the container named b2 is 172.17.0.3, which differs from the first container's IP address; that is, the two containers do not share a network. If we change the way the second container is started, we can make b2's IP consistent with b1's, i.e. share the IP, but not the file system.
[root@localhost ~]# docker run -it --name b2 --rm --network container:b1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
Here we create a directory on the b1 container
/ # mkdir /tmp/data
/ # ls /tmp/
data
Checking the /tmp directory in the b2 container shows no such directory, because the file systems are isolated; only the network is shared.
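For example, in the b2 container (assuming a fresh busybox image, whose /tmp starts out empty):

//Run in the b2 container: the directory created in b1 is not visible
/ # ls /tmp/
/ #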
Deploy a site on the b2 container
/ # echo 'hello xaw' > /tmp/index.html
/ # ls /tmp/
index.html
/ # httpd -h /tmp    # -h specifies the site root directory
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
Access this site from the b1 container using the shared address:
/ # wget -O - -q 172.17.0.2:80
hello xaw
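Because the two containers share a single network namespace, the socket opened in b2 is also visible from b1; a quick check (output assumed to mirror the netstat output above):

//Run in the b1 container
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN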
Thus, in container mode, the relationship between the containers is equivalent to that of two different processes on the same host.
host mode configuration
Specify host mode directly when starting the container:
[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 00:0c:29:36:5a:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.47.163/24 brd 192.168.47.255 scope global dynamic noprefixroute ens33
       valid_lft 1089sec preferred_lft 1089sec
    inet6 fe80::fd6:eb9d:a09a:50a8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
    link/ether 02:42:c6:33:e3:b4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c6ff:fe33:e3b4/64 scope link
       valid_lft forever preferred_lft forever
11: veth6eebe73@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0
    link/ether 6a:3f:54:d4:a8:51 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::683f:54ff:fed4:a851/64 scope link
       valid_lft forever preferred_lft forever
/ #
At this point, if we launch an http site in this container, we can access the site in this container directly in the browser using the host's IP
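For example, a short sketch reusing the busybox httpd from the container-mode demo ("hello host mode" is just sample content; busybox httpd listens on port 80 by default):

//Inside the host-mode container
/ # echo 'hello host mode' > /tmp/index.html
/ # httpd -h /tmp

//From another terminal: port 80 is bound directly on the host
[root@localhost ~]# curl http://192.168.47.163
hello host mode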
Common operations for containers
View the host name of the container
[root@localhost ~]# docker run -it --name b2 --rm --network bridge busybox
/ # hostname
405945f0a5d9
Inject host name at container startup
[root@localhost ~]# docker run -it --hostname node1 --rm busybox
/ # hostname
node1
/ # cat /etc/hostname
node1
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      node1
/ # ping node1
PING node1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.083 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.040 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.043 ms
^Z[1]+  Stopped                    ping node1
Manually specify the DNS to be used by the container
[root@localhost ~]# docker run -it --rm --hostname node1 --dns 114.114.114.114 busybox
/ # hostname
node1
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      node1
/ #
Manually inject hostname-to-IP-address mappings into the /etc/hosts file
[root@localhost ~]# docker run -it --rm --hostname node1 --dns 114.114.114.114 --add-host node2:172.17.0.3 --add-host node3:172.17.0.3 busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      node2
172.17.0.3      node3
172.17.0.2      node1
Open container ports
docker run has a -p option that maps application ports inside the container to the host machine, so that external hosts can access the application in the container by accessing a port on the host.
The -p option can be used multiple times and must expose ports that the container is actually listening on.
The -p option supports four formats:

-p <containerPort>: maps the container port to a dynamic port on all host addresses
-p <hostPort>:<containerPort>: maps the container port to the specified host port on all host addresses
-p <ip>::<containerPort>: maps the container port to a dynamic port on the specified host IP
-p <ip>:<hostPort>:<containerPort>: maps the container port to the specified port on the specified host IP

Dynamic ports are random ports, and the specific mapping results can be viewed with the docker port command.
//-d runs the container in the background
[root@localhost ~]# docker run -d --name web --rm -p 80 nginx
b7debc0fae6e78c9ea2c1aa2889e864fa91860e40f6770f704d25a8311c97f78
Let's open a new terminal and check which host port the container's port 80 has been mapped to:
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:49153
80/tcp -> :::49153
Thus, port 80 of the container is exposed on port 49153 of the host machine. Let's visit this port on the host to see whether we can reach the site inside the container:
[root@localhost ~]# curl http://192.168.47.163:49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The second, and most commonly used, format maps port 80 of the container to a specified port on the host, here 8080:
[root@localhost ~]# docker run -d --name web --rm -p 8080:80 nginx
1ec31fc84269e40dc5bdd46feca294c21667f42cf238b198956fdcc74bd2e545
//Check the listening ports
[root@localhost ~]# ss -antl
State   Recv-Q  Send-Q   Local Address:Port    Peer Address:Port   Process
LISTEN  0       128            0.0.0.0:8080         0.0.0.0:*
LISTEN  0       128            0.0.0.0:22           0.0.0.0:*
LISTEN  0       128               [::]:8080            [::]:*
LISTEN  0       128               [::]:22              [::]:*
//Visit the site
[root@localhost ~]# curl 192.168.47.163:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Third way
The iptables firewall rules are generated automatically when the container is created and removed automatically when the container is deleted or exits.
Map a container port to a random port on a specified IP
[root@localhost ~]# docker run -d --name web --rm -p 192.168.47.163::80 nginx
d253bcb06661c033a7af26b3495cd6b5b485e4361b7aa782559677197914a49b
View port mappings on another terminal
[root@localhost ~]# ss -antl
State   Recv-Q  Send-Q    Local Address:Port    Peer Address:Port   Process
LISTEN  0       128             0.0.0.0:22           0.0.0.0:*
LISTEN  0       128      192.168.47.163:49153        0.0.0.0:*
LISTEN  0       128                [::]:22              [::]:*
//Visit the site
[root@localhost ~]# curl 192.168.47.163:49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
//Check the mapping
[root@localhost ~]# docker port web
80/tcp -> 192.168.47.163:49153
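These port mappings are implemented as DNAT rules in the nat table; a hedged way to inspect them (exact chain layout and output vary by Docker version):

[root@localhost ~]# iptables -t nat -nvL DOCKER
//Look for a DNAT rule forwarding dpt:49153 on 192.168.47.163 to 172.17.0.2:80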
Fourth way: map the container port to a specified port on a specified IP
[root@localhost ~]# docker run -d --name web --rm -p 127.0.0.1:80:80 nginx
ec9a4062ddbc7ba6ac8788dc393c9fb5e97220fa69b7e2ea68604b5279322e18
[root@localhost ~]# ss -antl
State   Recv-Q  Send-Q   Local Address:Port    Peer Address:Port   Process
LISTEN  0       128          127.0.0.1:80           0.0.0.0:*
LISTEN  0       128            0.0.0.0:22           0.0.0.0:*
LISTEN  0       128               [::]:22              [::]:*
[root@localhost ~]# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...