Using tc to simulate network latency and packet loss under Linux

Posted by fredmeyer on Fri, 24 May 2019 01:49:42 +0200

1. Simulating transmission delay

netem is a network emulation module provided by Linux kernels 2.6 and above. It can reproduce complex wide-area transmission behaviour, such as low bandwidth, transmission delay, and packet loss, on an otherwise well-behaved LAN. Most distributions that ship a 2.6+ kernel (Fedora, Ubuntu, Red Hat, openSUSE, CentOS, Debian, and so on) enable it by default. tc (short for traffic control) is the userspace tool that configures how netem works. In other words, using netem requires two things: a kernel with the netem module enabled, and the tc utility.

Note that the traffic control described here only shapes outgoing (egress) traffic, not incoming traffic, and that it takes effect on the physical interface. If the physical eth0 is controlled, logical interfaces on it (such as eth0:1) are affected as well; controlling a logical interface directly may have no effect. Multiple NICs in a virtual machine count as separate physical NICs for this purpose.

tc qdisc add dev eth0 root netem delay 100ms

//This command delays every packet sent from eth0 by 100 milliseconds

In practice the delay is rarely that precise; it fluctuates. We can simulate a fluctuating delay like this:

tc qdisc add dev eth0 root netem delay 100ms 10ms

//This command delays eth0's transmissions by 100ms ± 10ms (any value between 90 and 110 ms)

The randomness of the fluctuation can be shaped further with a correlation value:

tc qdisc add dev eth0 root netem delay 100ms 10ms 30%

//This command delays eth0's transmissions by 100ms ± 10ms, with each packet's delay about 30% dependent on the previous packet's delay (correlation)
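netem can also draw the jitter from a table-based distribution rather than a uniform one. A minimal sketch, assuming eth0 is the interface under test (requires root); the normal, pareto, and paretonormal tables ship with iproute2:

```shell
# Sketch (requires root): delay of 100ms with 20ms jitter drawn from a
# normal distribution instead of a uniform one.
tc qdisc add dev eth0 root netem delay 100ms 20ms distribution normal

# Remove the emulation again when done.
tc qdisc del dev eth0 root
```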

2. Simulating packet loss

tc qdisc add dev eth0 root netem loss 1%

//This command makes eth0 randomly drop 1% of outgoing packets

A correlation can also be set for packet loss:

tc qdisc add dev eth0 root netem loss 1% 30%

//This command makes eth0 randomly drop 1% of outgoing packets, with each drop about 30% correlated with whether the previous packet was dropped

3. Delete the relevant configuration on the network card

Change add in the previous commands to del to delete the configuration:

tc qdisc del dev eth0 XXXXXX (whatever configuration you added)

// This command deletes the transmission configuration on eth0

So far we can simulate a given network latency and packet loss with tc in a test environment. The rest of this article covers tc's broader uses.
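Since the add/change/del commands above differ only in the action verb, they can be wrapped in a small helper. This is only a sketch; netem_cmd is a hypothetical function, not part of tc. It prints the command rather than executing it, so it is safe to inspect (pipe to sh as root to apply):

```shell
# Hypothetical helper: build the tc/netem command line for an interface.
# Printing instead of executing keeps it safe to run without root.
netem_cmd() {
  local action="$1" dev="$2"
  shift 2
  if [ "$action" = "del" ]; then
    echo "tc qdisc del dev $dev root"
  else
    echo "tc qdisc $action dev $dev root netem $*"
  fi
}

netem_cmd add eth0 delay 100ms 10ms   # -> tc qdisc add dev eth0 root netem delay 100ms 10ms
netem_cmd del eth0                    # -> tc qdisc del dev eth0 root
```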

4. Simulating packet duplication

tc qdisc add dev eth0 root netem duplicate 1%

//This command makes eth0 randomly duplicate 1% of outgoing packets

5. Simulating packet corruption

tc qdisc add dev eth0 root netem corrupt 0.2%

//This command makes eth0 randomly corrupt 0.2% of outgoing packets (requires kernel 2.6.16 or later)

6. Simulating packet reordering

tc qdisc change dev eth0 root netem delay 10ms reorder 25% 50%

//This command makes 25% of eth0's packets (with 50% correlation) be sent immediately, while the rest are delayed by 10 ms

In newer versions, the following command also disrupts packet ordering to some extent, because each packet receives an independent random delay:

tc qdisc add dev eth0 root netem delay 100ms 10ms
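All of the impairments above can be combined in a single netem qdisc. A sketch, assuming eth0 and purely illustrative percentages (requires root); note that reorder only works together with delay:

```shell
# Sketch (requires root): one netem qdisc combining delay, loss,
# duplication, corruption, and reordering. The percentages are
# illustrative only; reorder requires delay to be set.
tc qdisc add dev eth0 root netem delay 100ms 10ms loss 0.5% duplicate 0.1% corrupt 0.1% reorder 25% 50%
```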

7. View the configured network conditions

tc qdisc show dev eth0

//This command will view and display the relevant transport configuration of eth0 network card

8. Introduction to TC Flow Control

In Linux, TC offers two main shaping frameworks, CBQ and HTB. HTB was designed to replace CBQ and is a hierarchical filtering framework.

TC consists of three basic components: queueing disciplines, classes, and classifiers (filters).

(1) Queueing discipline in TC

A queueing discipline controls the speed at which the network sends packets. Through queues, Linux can buffer network packets and then smooth traffic according to the user's settings without breaking connections (such as TCP).

Note that Linux's control over receive queues is weak, so in practice only the send queue is used: "control sending, not receiving". The qdisc encapsulates the other two major TC components (classes and filters). When the kernel needs to send packets out of an interface, it enqueues them according to the qdisc configured for that interface; it then dequeues as many packets as possible from the qdisc and hands them to the network adapter driver.

The simplest qdisc is pfifo, which does no processing of packets: they are queued first-in, first-out. It does, however, buffer packets that the network interface momentarily cannot handle.

Queueing rules include FIFO (first in, first out), RED (random early detection), SFQ (stochastic fairness queueing), and Token Bucket. CBQ is a "super-queue" in that it can contain other queues (even other CBQs).
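The Token Bucket mentioned above is also available as a standalone qdisc (tbf). A minimal sketch, assuming eth0 (requires root):

```shell
# Sketch (requires root): limit eth0 egress to 1 mbit with a 32 kbit
# bucket and at most 400 ms of queueing latency.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
```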

(2) Classes in TC

A class represents a control policy. Often we want different traffic-control policies for different IPs; each policy is then expressed as a different class.

(3) Filter Rules in TC

Filters assign traffic to specific control policies (i.e., classes). For example, to apply different policies (A, B) to IPs xxa and xxb, we use filters to classify xxa's traffic into policy A and xxb's into policy B. The mark a filter matches on can be produced by the u32 classifier or by iptables' set-mark facility.

The filters TC currently supports include the fwmark classifier, the u32 classifier, the routing-based classifier, and the RSVP classifiers (for IPv4 and IPv6, respectively). The fwmark classifier selects traffic using Linux netfilter marks, while the u32 classifier selects traffic based on any packet header field. Note that filters live inside a qdisc and cannot stand on their own.

(4) Application process of TC

Packet -> iptables (as packets pass through, iptables sets a different mark per IP) -> TC class -> TC queue

(5) Application

Suppose eth0 is the server's external network interface. Before starting, clear all queueing rules on eth0:

tc qdisc del dev eth0 root 2> /dev/null > /dev/null

1) Define top-level (root) queue rules and specify default category number

tc qdisc add dev eth0 root handle 1: htb default 2

2) Define the first-layer classes (speeds). Ideally a second layer of leaf classes would also be defined, but for this application the following is enough.

tc class add dev eth0 parent 1: classid 1:2 htb rate 98mbit ceil 100mbit prio 2 
tc class add dev eth0 parent 1: classid 1:3 htb rate 1mbit ceil 2mbit prio 2

Note: the classes above control the server's outbound speed: one is guaranteed 98M (up to 100M), the other 1M (up to 2M).

rate: the bandwidth guaranteed to a class. If there are several classes, make sure the sum of all child rates is less than or equal to the parent's rate.
prio: priority when competing to borrow bandwidth; the smaller the prio, the higher the priority and the stronger the competitiveness.
ceil: the maximum bandwidth a class may use.

At the same time, so that no single session monopolizes the bandwidth forever, a fairness queue (sfq) is added:

tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10 
tc qdisc add dev eth0 parent 1:3 handle 3: sfq perturb 10

3) Setting filter

Filters can mark traffic themselves via u32, or rely on iptables marks.
In the root class 1:0, filter traffic from 192.168.0.2 into rule 1:2 to give it the 98M speed:

tc filter add dev eth0 protocol ip parent 1:0 u32 match ip src 192.168.0.2 flowid 1:2
tc filter add dev eth0 protocol ip parent 1:0 u32 match ip src 192.168.0.1 flowid 1:3

To match all IPs, write:

tc filter add dev eth0 protocol ip parent 1: prio 50 u32 match ip dst 0.0.0.0/0 flowid 1:10

//Using iptables together with filters

You can also use this method, but you must then tag packets with iptables. The handle in the fw filter must match the iptables mark:

tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:2 
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 20 fw flowid 1:3

iptables then only needs to set the marks:

iptables -t mangle -A POSTROUTING -d 192.168.0.2 -j MARK --set-mark 10
iptables -t mangle -A POSTROUTING -d 192.168.0.3 -j MARK --set-mark 20

(6) TC's control of the highest speed

Rate ceiling
The ceil parameter specifies the maximum bandwidth a class can use, limiting how much it may borrow. The default ceil equals the rate.

This feature is useful for ISPs, who generally want to cap each user even when other users are requesting nothing (ISPs want users to pay more for better service). Note that the root class is not allowed to borrow, so no ceil is specified for it.

Note: ceil must be at least as high as the rate of the class it belongs to, and a parent's ceil should be at least as high as that of any of its children.

(7) Burst

Network hardware can only send one packet at a time, and only at the hardware's native speed. Link-sharing software exploits this to emulate several connections running at different speeds, so rate and ceil are not instantaneous measures but averages over the packets sent within a period. In practice, a class with little traffic is occasionally allowed to send at the maximum rate for a short time.

The burst and cburst parameters control how much data may be sent at the full hardware speed before trying to serve other classes. If cburst is smaller than one theoretical packet, the bursts it generates never exceed the ceil rate, in the same way that TBF's peak rate works.

Why are bursts needed? Because they cheaply improve responsiveness on a congested link. Web traffic, for example, is bursty: you fetch a page in one quick burst, read it, and the bucket refills during the idle time.

Note: a parent's burst and cburst should be at least as large as those of its children.
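As a rule of thumb, a burst should hold at least rate / HZ bytes, since on older kernels packets are dequeued once per timer tick. A sketch of that arithmetic (HZ=100 is an assumption; modern kernels use high-resolution timers, so treat the result as a lower-bound estimate only):

```shell
# Rule-of-thumb sketch: minimum burst (bytes) = rate / 8 / HZ.
# HZ=100 is assumed here; the result is a lower bound only.
min_burst_bytes() {
  local rate_bps="$1" hz="$2"
  echo $(( rate_bps / 8 / hz ))
}

min_burst_bytes 10000000 100   # 10 mbit at HZ=100 -> 12500
```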

(8) TC command format

Add

tc qdisc [ add | change | replace | link ] dev DEV [ parent qdisc-id | root ] [ handle qdisc-id ] qdisc [ qdisc specific parameters ]
tc class [ add | change | replace ] dev DEV parent qdisc-id [ classid class-id ] qdisc [ qdisc specific parameters ]
tc filter [ add | change | replace ] dev DEV [ parent qdisc-id | root ] protocol protocol prio priority filtertype [ filtertype specific parameters ] flowid flow-id

Display

tc [-s | -d ] qdisc show dev DEV 
tc [-s | -d ] class show dev DEV 
tc filter show dev DEV

View the status of TC

tc -s -d qdisc show dev eth0
tc -s -d class show dev eth0

Delete tc rule

tc qdisc del dev eth0 root

Example

1) Using TC to limit the download speed of a single IP

tc qdisc add dev eth0 root handle 1: htb r2q 1 
tc class add dev eth0 parent 1: classid 1:1 htb rate 30mbit ceil 60mbit 
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 192.168.1.2  flowid 1:1

This limits 192.168.1.2's download speed to a guaranteed 30 Mbit, up to 60 Mbit. r2q is specified here because the root has no default class, so the bandwidth of the rest of the network remains unlimited.

2) Using TC to control the speed of a whole IP segment

tc qdisc add dev eth0 root handle 1: htb r2q 1 
tc class add dev eth0 parent 1: classid 1:1 htb rate 50mbit ceil 1000mbit 
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 192.168.111.0/24 flowid 1:1

This limits the whole 192.168.111.0-255 segment to a shared guaranteed rate of 50 Mbit (ceil 1000 Mbit); all machines in the segment share that bandwidth.

You can also add an SFQ (stochastic fairness queue):

tc qdisc add dev eth0 root handle 1: htb r2q 1 
tc class add dev eth0 parent 1: classid 1:1 htb rate 3000kbit burst 10k 
tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10 
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 192.168.111.168 flowid 1:1

The sfq prevents a single IP in the segment from occupying the entire bandwidth.

3) Using TC to control the external speed of the server to 10M

As follows, I want to manage a server that can only send 10M of data to the outside world.

tc qdisc del dev eth0 root 
tc qdisc add dev eth0 root handle 1: htb 
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit 
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit 
tc qdisc add dev eth0 parent 1:10 sfq perturb 10
tc filter add dev eth0 protocol ip parent 1: prio 2 u32 match ip dst 220.181.xxx.xx/32 flowid 1:1 

The filter above lets traffic to 220.181.xxx.xx/32 use the unrestricted class 1:1, mainly so that connections to that IP are not throttled.

tc filter add dev eth0 protocol ip parent 1: prio 50 u32 match ip dst 0.0.0.0/0 flowid 1:10 

By default, all remaining traffic passes through the 10M class 1:10.

Reference sources: http://blog.csdn.net/weiweicao0429/article/details/17578011
