Configuration of docker network

Posted by jeffkee on Wed, 08 Dec 2021 00:08:52 +0100

Creating namespaces in the Linux kernel

ip netns command

You can perform various operations on a Network Namespace with the ip netns command. The command comes from the iproute package, which most systems install by default; if yours lacks it, install it yourself.

Note: sudo permission is required when the ip netns command modifies the network configuration.

Help for the command is available via ip netns help:

[root@localhost ~]# ip netns help   # note: help is a subcommand here, not a --help flag
Usage:  ip netns list
        ip netns add NAME
        ip netns attach NAME PID
        ip netns set NAME NETNSID
        ip [-all] netns delete [NAME]
        ip netns identify [PID]
        ip netns pids NAME
        ip [-all] netns exec [NAME] cmd ...
        ip netns monitor
        ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT

By default there are no Network Namespaces on a Linux system, so the ip netns list command returns no output.

[root@localhost ~]# ip netns list # list
[root@localhost ~]#

Create a Network Namespace

Create a namespace named ns0:

[root@localhost ~]# ip netns add ns0 # create a namespace named ns0
[root@localhost ~]# ip netns list # list
ns0

The newly created Network Namespace appears in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports Cannot create namespace file "/var/run/netns/ns0": File exists.

[root@localhost ~]# ls /var/run/netns
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists

# A file created manually under /var/run/netns is not recognized as a valid namespace either
[root@localhost ~]# touch /var/run/netns/ns1
[root@localhost ~]# ip netns list
Error: Peer netns reference is invalid. # the invalid entry makes listing report errors
Error: Peer netns reference is invalid.
ns1
ns0

# delete
[root@localhost ~]# ip netns del ns1
[root@localhost ~]# ip netns list # Check again, no error is reported
ns0

Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
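
You can observe this isolation directly. A quick check (assuming the ns0 namespace created above still exists):

[root@localhost ~]# ip route                        # the host has routes for ens33, docker0, etc.
[root@localhost ~]# ip netns exec ns0 ip route      # no output: ns0 starts with an empty routing table
[root@localhost ~]# ip netns exec ns0 ip neigh      # empty ARP table
[root@localhost ~]# ip netns exec ns0 iptables -nL  # its own, initially empty, iptables chains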

Operating on a Network Namespace

The ip command provides the ip netns exec subcommand, which runs a command inside the specified Network Namespace.

View the interface information of the newly created Network Namespace:

[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

You can see that the new Network Namespace contains a lo loopback interface by default, and that it is down. If you try to ping through lo at this point, you are told the network is unreachable:

[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable

Bring the lo loopback interface up with the following command:

[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.053 ms

Transferring devices

Devices (such as a veth) can be moved between network namespaces. Since a device can belong to only one Network Namespace at a time, it is no longer visible in the original namespace after the transfer.

Veth devices are transferable; many other device types (lo, vxlan, ppp, bridge, and so on) are bound to their namespace and cannot be moved.
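
You can verify the restriction yourself: asking the kernel to move lo into a namespace is refused (the exact error text may vary with the iproute2 version):

[root@localhost ~]# ip link set lo netns ns0
RTNETLINK answers: Invalid argument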

veth pair

veth pair is short for Virtual Ethernet Pair: a pair of connected ports. Every packet that enters one end of the pair comes out the other end, and vice versa.
veth pairs were introduced to let different network namespaces communicate directly; a single pair can connect two namespaces.
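
For reference, both ends of a pair can also be given explicit names at creation time (veth-a and veth-b below are arbitrary example names); in the next step we instead let the kernel pick the default names veth0 and veth1:

[root@localhost ~]# ip link add veth-a type veth peer name veth-b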

Create a veth pair

[root@localhost ~]# ip a  # Before creation
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:21:52:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.17/24 brd 192.168.220.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1792:21f6:7f28:5ffa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a1:e4:66:9d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
       
[root@localhost ~]# ip link add type veth # create the pair
[root@localhost ~]# ip a # verify
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:21:52:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.17/24 brd 192.168.220.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1792:21f6:7f28:5ffa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a1:e4:66:9d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000  # Newly created
    link/ether d6:90:9d:4e:95:77 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000  # Newly created
    link/ether 9e:79:1e:8a:72:3d brd ff:ff:ff:ff:ff:ff

You can see that a veth pair has been added to the system, consisting of the two connected virtual interfaces veth0 and veth1. At this point both ends of the pair are still down.

Enable communication between network namespaces

Next, we use the veth pair to connect two different network namespaces. We already created a Network Namespace named ns0 above; now we create another one named ns1.

[root@localhost ~]# ip netns list
ns0
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0

Then we add veth0 to ns0 and veth1 to ns1

[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1

Then we configure IP addresses on the two veth interfaces and bring them up

[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.168.2.1/24 dev veth0

[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1  ip addr add 192.168.2.2/24 dev veth1

View the status of the veth pair

[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:90:9d:4e:95:77 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.2.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d490:9dff:fe4e:9577/64 scope link 
       valid_lft forever preferred_lft forever
       
       
[root@localhost ~]#  ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:79:1e:8a:72:3d brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.168.2.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::9c79:1eff:fe8a:723d/64 scope link 
       valid_lft forever preferred_lft forever

As shown above, the veth pair is up and each end has been assigned its IP address. Now, from ns1, we try to reach the address assigned in ns0:

[root@localhost ~]# ip netns exec ns1 ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.223 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.106 ms

As you can see, the veth pair successfully enables network communication between two different network namespaces.

Renaming veth devices

Rename veth0 in ns0

[root@localhost ~]# ip netns exec ns0 ip link set veth0 down # bring veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name eth0 # rename veth0 to eth0
[root@localhost ~]# ip netns exec ns0 ip link set eth0 up # bring eth0 up
[root@localhost ~]# ip netns exec ns0 ip a # verify
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 # Rename succeeded
    link/ether d6:90:9d:4e:95:77 brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.2.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d490:9dff:fe4e:9577/64 scope link 
       valid_lft forever preferred_lft forever

Rename veth1 in ns1

[root@localhost ~]# ip netns exec ns1 ip link set veth1 down # bring veth1 down
[root@localhost ~]# ip netns exec ns1 ip link set dev veth1 name eth0 # rename veth1 to eth0
[root@localhost ~]# ip netns exec ns1 ip link set eth0 up # bring eth0 up
[root@localhost ~]# ip netns exec ns1 ip a # verify
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 # Rename succeeded
    link/ether 9e:79:1e:8a:72:3d brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.168.2.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::9c79:1eff:fe8a:723d/64 scope link 
       valid_lft forever preferred_lft forever

The four network mode configurations

bridge mode configuration
[root@localhost ~]# docker pull busybox
[root@localhost ~]# docker run -it --name b1 --rm busybox  # --rm removes the container automatically when it stops
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit

[root@localhost ~]# docker run -it --name b1 --network bridge --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit

When creating a container, passing --network bridge has the same effect as passing no --network option at all: bridge is the default mode.
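
You can confirm the defaults by inspecting the bridge network itself; docker network inspect shows its subnet, gateway, and currently attached containers:

[root@localhost ~]# docker network inspect bridge   # look for "Subnet": "172.17.0.0/16" and "Gateway": "172.17.0.1"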

none mode
[root@localhost ~]# docker run -it --name b2 --rm --network none busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # exit

In none mode, the Docker container gets its own Network Namespace, but Docker performs no network configuration for it: the container has no interface (other than lo), no IP, and no routes. We have to add an interface and configure an IP for the container ourselves.
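
That manual configuration is essentially the veth procedure shown earlier in this article. A minimal sketch, assuming a running none-mode container named b2; the interface names v-host/v-cont and the subnet 192.168.3.0/24 are arbitrary examples:

[root@localhost ~]# pid=$(docker inspect -f '{{.State.Pid}}' b2)    # PID of the container's init process
[root@localhost ~]# mkdir -p /var/run/netns
[root@localhost ~]# ln -sf /proc/$pid/ns/net /var/run/netns/b2      # expose the container's netns to ip netns
[root@localhost ~]# ip link add v-host type veth peer name v-cont   # create a veth pair
[root@localhost ~]# ip link set v-cont netns b2                     # move one end into the container
[root@localhost ~]# ip netns exec b2 ip addr add 192.168.3.2/24 dev v-cont
[root@localhost ~]# ip netns exec b2 ip link set v-cont up
[root@localhost ~]# ip addr add 192.168.3.1/24 dev v-host
[root@localhost ~]# ip link set v-host up
[root@localhost ~]# ping -c 1 192.168.3.2                           # the container is now reachable from the host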

container mode
# Start the first container
[root@localhost ~]# docker run -it --name b3 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

# In a second terminal, start a container in container mode, joined to the first container's network
[root@localhost ~]# docker run -it --name b4 --rm --network container:b3 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #


# Create a directory on the b3 container
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # mkdir QAQ
/ # ls
QAQ   bin   dev   etc   home  proc  root  sys   tmp   usr   var

# View on b4
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var

# b4 does not have the directory: the file system is isolated; only the network is shared.

# Deploy a web site on b3
/ # echo "This is a pig." > QAQ/index.html
/ # httpd -h QAQ/
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::80                   :::*                    LISTEN

# Access on b4
/ # wget -qO - 172.17.0.2
This is a pig.
/ #
# In container mode, the two containers relate to each other like two processes on the same host

Container mode makes a newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own interfaces or configure its own IP; it shares the IP, port range, and so on of the specified container. Everything else, such as the file system and process list, remains isolated between the two containers. Their processes can communicate through the lo loopback device.
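
You can verify the sharing from the host: the processes of both containers point at the same network namespace (a quick check, assuming b3 and b4 are still running):

[root@localhost ~]# readlink /proc/$(docker inspect -f '{{.State.Pid}}' b3)/ns/net
[root@localhost ~]# readlink /proc/$(docker inspect -f '{{.State.Pid}}' b4)/ns/net
# both commands print the same net:[...] identifier: one shared Network Namespace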

host mode

Specify host mode directly when starting the container

[root@localhost ~]# docker run -it --name b5 --rm --network host busybox
/ # ip a  # container
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 00:0c:29:21:52:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.17/24 brd 192.168.220.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1792:21f6:7f28:5ffa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:a1:e4:66:9d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a1ff:fee4:669d/64 scope link 
       valid_lft forever preferred_lft forever
       
[root@localhost ~]# ip a # on the host
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:21:52:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.17/24 brd 192.168.220.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1792:21f6:7f28:5ffa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a1:e4:66:9d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a1ff:fee4:669d/64 scope link 
       valid_lft forever preferred_lft forever      
  
# Now start a site inside the container; it can be reached from a browser directly via the host's IP
# Deploy a web site in the container
/ # mkdir www
/ # echo "This is a cat." > www/index.html
/ # httpd -h www/

# Host access
[root@localhost ~]# curl 192.168.220.17
This is a cat.

If host mode is used when starting the container, the container does not get an independent Network Namespace; it shares one with the host. The container does not virtualize its own interface or configure its own IP; it uses the host's IP and ports. Other aspects of the container, such as the file system and process list, remain isolated from the host.
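
The sharing can be verified the same way as in container mode: the container's process points at the host's own network namespace (assuming b5 is still running):

[root@localhost ~]# readlink /proc/1/ns/net                                         # the host's network namespace
[root@localhost ~]# readlink /proc/$(docker inspect -f '{{.State.Pid}}' b5)/ns/net  # prints the same net:[...] identifier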

Common container operations

View the host name of the container
[root@localhost ~]# docker run -it --name b6 --rm busybox
/ # hostname
322e0365483b
Inject a hostname when the container starts
[root@localhost ~]# docker run -it --name b7 --rm --hostname glfqdp busybox
/ # hostname
glfqdp
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      glfqdp   # A hostname-to-IP mapping is created automatically when the hostname is injected
/ # cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 114.114.114.114   # DNS is automatically set to the host's DNS
nameserver 8.8.8.8
/ # ping baidu.com
PING baidu.com (220.181.38.251): 56 data bytes
64 bytes from 220.181.38.251: seq=0 ttl=127 time=30.270 ms
64 bytes from 220.181.38.251: seq=1 ttl=127 time=30.020 ms
/ # exit
Manually specify the DNS to be used by the container
[root@localhost ~]# docker run -it --name b8 --rm --dns 8.8.8.8 --hostname glfqdp busybox
/ # cat /etc/resolv.conf 
nameserver 8.8.8.8
/ # exit
Manually inject a hostname-to-IP mapping into the /etc/hosts file
[root@localhost ~]# docker run -it --name b9 --rm --hostname lplp --add-host baidu.com:8.8.8.8 busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
8.8.8.8 baidu.com
172.17.0.2      lplp
/ # exit
Port mapping

docker run has a -p option that maps an application port in the container to the host, so that external hosts can reach the application in the container by accessing that port on the host.

The -p option can be used multiple times; the port it exposes must be a port the container is actually listening on.

Usage formats of the -p option:

  • -p <containerPort>
    • Maps the specified container port to a dynamic port on all host addresses

Dynamic means random: the actual mapping can be viewed with the docker port command.

# Map port 80 of the nginx inside the container to a random host port
[root@localhost ~]# docker run -d --name web --rm -p 80 yunjisuanlp/nginx:v3
acaea4eab08b9937b06dfe93da3d86795ac859c29a60ee0edc8f120aaf9d29ab

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:49153  # ipv4
80/tcp -> :::49153      # ipv6

Port 80 of the container is exposed on host port 49153. We can now access that port on the host to verify that the site in the container is reachable:

[root@localhost ~]# curl 192.168.220.17:49153
welcome to nginx!

iptables firewall rules are generated automatically when the container is created and removed automatically when the container is stopped or deleted.
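
You can watch these rules appear and disappear in the DOCKER chain of the nat table (run on the host while the web container above is up):

[root@localhost ~]# iptables -t nat -nL DOCKER   # contains a DNAT rule forwarding host port 49153 to 172.17.0.2:80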

  • -p <hostPort>:<containerPort>
    • Maps the container port to the specified host port

Map the container port to the specified port of the host

[root@localhost ~]# docker run -itd --name web --rm -p 8080:80 yunjisuanlp/nginx:v3

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:8080
80/tcp -> :::8080

# Host access
[root@localhost ~]# curl 192.168.220.17:8080
welcome to nginx!
  • -p <ip>::<containerPort>
    • Maps the specified container port to a dynamic port at the specified host IP

Maps the specified container port to a random port of the host specified IP

[root@localhost ~]# docker run -itd --name web --rm -p 192.168.220.17::80 yunjisuanlp/nginx:v3
68d446f3c450ef707519dc92cc55adba837623eaa63062b49abc8af07c1e5b35

[root@localhost ~]# docker port web
80/tcp -> 192.168.220.17:49153

# Host access
[root@localhost ~]# curl 192.168.220.17:49153
welcome to nginx!
  • -p <ip>:<hostPort>:<containerPort>
    • Maps the specified container port to the specified port at the specified host IP

Maps the specified container port to the specified port of the specified host IP

[root@localhost ~]# docker run -itd --name web --rm -p 192.168.220.17:9999:80 yunjisuanlp/nginx:v3
1deff8a4a3f28a3ed2661907edd9f64c4878373e44260698cb60875e860010df

[root@localhost ~]# docker port web
80/tcp -> 192.168.220.17:9999

# Host access
[root@localhost ~]# curl 192.168.220.17:9999
welcome to nginx!

-P (uppercase) publishes all ports EXPOSEd by the container image to random ports on the host.
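
A brief sketch of -P. It only publishes ports declared with EXPOSE in the image, so this assumes the yunjisuanlp/nginx:v3 image exposes port 80 (and that the previous web container has been stopped, freeing the name):

[root@localhost ~]# docker run -itd --name web --rm -P yunjisuanlp/nginx:v3
[root@localhost ~]# docker port web   # each EXPOSEd port is now mapped to a random host port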

Customizing the network attributes of the docker0 bridge

See the official documentation for the related configuration options.

To customize the docker0 bridge's network attributes, modify the /etc/docker/daemon.json configuration file:

[root@localhost ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://wn5c7d7w.mirror.aliyuncs.com"],
  "bip": "192.168.2.1/24" # Change the docker0 network card IP of the host
}

[root@localhost ~]# systemctl restart docker

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:21:52:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.17/24 brd 192.168.220.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1792:21f6:7f28:5ffa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a1:e4:66:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a1ff:fee4:669d/64 scope link 
       valid_lft forever preferred_lft forever

Before the change, docker0's IP defaults to 172.17.0.1/16. The core option is bip (bridge IP), which specifies the IP address of the docker0 bridge itself; other options can be derived from this address.
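
Besides bip, daemon.json accepts several related keys. A sketch of a fuller configuration (the values are illustrative, and the # annotations are explanations only; they must be removed from real JSON):

{
  "bip": "192.168.2.1/24",                # IP address and netmask of the docker0 bridge itself
  "fixed-cidr": "192.168.2.0/25",         # restrict container IPs to a sub-range of the bridge network
  "mtu": 1500,                            # MTU for container interfaces
  "dns": ["114.114.114.114", "8.8.8.8"]   # default DNS servers handed to containers
}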

Create a container and view its IP address

[root@localhost ~]# docker run -itd --name web --rm yunjisuanlp/nginx:v3
94b02bec9a8e03d8c73f19282f5954ba70c53130fd5a8d4c8af3bf3fe9ff8fdd

[root@localhost ~]# docker exec -it web /bin/bash
[root@94b02bec9a8e /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
46: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0 # The container's default IP is now in the 192.168.2.0/24 segment
       valid_lft forever preferred_lft forever

Creating a custom docker bridge

Create an additional custom bridge, distinct from docker0:

[root@localhost ~]# docker network create -d bridge --subnet "172.17.2.0/24" --gateway "172.17.2.1" br0
f96a9671bfa582b925305f8890c7fadf4b54cda6410cd238786dc7b0574700a5

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
f96a9671bfa5   br0       bridge    local
788ac3e94c5a   bridge    bridge    local
cd5368439dc0   host      host      local
c49a1db81682   none      null      local

Create a container using the newly created custom bridge:

[root@localhost ~]# docker run -itd --name web01 --rm --network br0 yunjisuanlp/nginx:v3
a98412139dc85eae51f6994737f24c56b2be3dac7211d7734fc099e8031904a4

[root@localhost ~]# docker exec -it web01 /bin/bash
[root@a98412139dc8 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
49: eth0@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.2.2/24 brd 172.17.2.255 scope global eth0
       valid_lft forever preferred_lft forever

Create another container and use the default bridge:

[root@localhost ~]# docker run -itd --name web02 --rm yunjisuanlp/nginx:v3
65d36dd328f7f522c3808917d2289ea84e69e9faa404ae7bc523138b4ff1292e

[root@localhost ~]# docker exec -it web02 /bin/bash
[root@65d36dd328f7 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
51: eth0@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever

Can two containers on different bridges communicate with each other at this point? If not, how do we make them communicate?

# Run two containers in different network segments
[root@localhost ~]# docker run -itd --name c1 --rm --network br0 yunjisuanlp/nginx:v3
b3b6e6dc9e2b486519acc5fd53ed4e911493715a097ebfddb53a509be12a6c80
[root@localhost ~]# docker run -itd --name c2 --rm yunjisuanlp/nginx:v3
0ed765ee0e78132eac679b0da613cccf7196240ba5cde093b47593666fbadad7
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS     NAMES
0ed765ee0e78   yunjisuanlp/nginx:v3   "/usr/local/nginx/sb..."   4 seconds ago    Up 3 seconds              c2
b3b6e6dc9e2b   yunjisuanlp/nginx:v3   "/usr/local/nginx/sb..."   14 seconds ago   Up 12 seconds             c1


[root@localhost ~]# docker exec -it c1 /bin/bash
[root@b3b6e6dc9e2b /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
57: eth0@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.2.2/24 brd 172.17.2.255 scope global eth0  # on the 172.17.2.0/24 segment
       valid_lft forever preferred_lft forever
[root@b3b6e6dc9e2b /]#


[root@localhost ~]# docker exec -it c2 /bin/bash
[root@0ed765ee0e78 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0 # on the 192.168.2.0/24 segment
       valid_lft forever preferred_lft forever

Connect the br0 network (where c1 lives) to c2, so that one container is attached to two bridges

[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS     NAMES
0ed765ee0e78   yunjisuanlp/nginx:v3   "/usr/local/nginx/sb..."   6 minutes ago   Up 6 minutes             c2
b3b6e6dc9e2b   yunjisuanlp/nginx:v3   "/usr/local/nginx/sb..."   6 minutes ago   Up 6 minutes             c1

[root@localhost ~]# docker network connect br0 0ed765ee0e78 # c2's container ID

# View c2
[root@0ed765ee0e78 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
61: eth1@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:02:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.2.3/24 brd 172.17.2.255 scope global eth1
       valid_lft forever preferred_lft forever   # c2 now has an interface on the same segment as c1
[root@0ed765ee0e78 /]# ping 172.17.2.2  # ping c1 container address
PING 172.17.2.2 (172.17.2.2) 56(84) bytes of data.
64 bytes from 172.17.2.2: icmp_seq=1 ttl=64 time=0.112 ms
64 bytes from 172.17.2.2: icmp_seq=2 ttl=64 time=1.21 ms       

The two containers can now communicate.
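
The inverse operation, detaching c2 from br0 again, is docker network disconnect:

[root@localhost ~]# docker network disconnect br0 0ed765ee0e78   # removes eth1 from the c2 container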

Topics: Linux Docker network