Packet congestion is inevitable during packet forwarding, and queueing is the most widely used technology for managing it. The queueing technology orders packets in a buffer: when the packet rate exceeds the interface bandwidth, or the bandwidth allocated to the queue that buffers the packets, the packets are held in queues and wait to be forwarded. The queue scheduling algorithm determines the order in which packets leave a queue and the relationships between queues. In this post, we introduce some widely used packet scheduling algorithms.
FIFO(First In First Out)
FIFO is the most common and widely used scheduling algorithm. It does not classify packets: packets that arrive earlier enter the queue first, and they leave the queue in the same order in which they entered it. FIFO works like a checkout line in a supermarket: when someone arrives at the checkstand, they join the end of the line, and customers are served in the order they arrived.
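The behavior above can be sketched in a few lines of Python; the packet names here are purely illustrative:

```python
from collections import deque

# Minimal FIFO sketch: packets leave in exactly the order they arrived.
queue = deque()

for pkt in ["p1", "p2", "p3"]:   # arrival order
    queue.append(pkt)            # enqueue at the tail

# Dequeue from the head until the queue is empty.
departures = [queue.popleft() for _ in range(len(queue))]
print(departures)  # departure order matches arrival order
```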

SP(Strict Priority)
SP schedules packets strictly by queue priority. Packets in a lower-priority queue are scheduled only after all packets in the higher-priority queues have been sent, so when packets leave the queues, the device forwards them in descending order of priority. If packets arrive in a higher-priority queue while a lower-priority queue is being served, the scheduler switches back to the higher-priority queue, which guarantees that higher-priority packets are always forwarded first. As long as a higher-priority queue holds packets, no other queue is served.
The disadvantage of SP is that packets in lower-priority queues are not processed until all higher-priority queues are empty. As a result, a congested higher-priority queue starves all lower-priority queues.
A high-priority queue is like a VIP gate: the packets in it always pass through first.
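A minimal sketch of strict priority selection, assuming three queues whose names and priority values are made up for illustration:

```python
from collections import deque

# Strict priority sketch: always serve the highest-priority non-empty
# queue; lower-priority queues wait until every higher one is drained.
queues = {2: deque(["hi1", "hi2"]),   # priority 2 (highest)
          1: deque(["mid1"]),         # priority 1
          0: deque(["lo1"])}          # priority 0 (lowest)

def sp_dequeue(queues):
    for prio in sorted(queues, reverse=True):  # highest priority first
        if queues[prio]:
            return queues[prio].popleft()
    return None                                # all queues empty

order = []
while (pkt := sp_dequeue(queues)) is not None:
    order.append(pkt)
print(order)  # all high-priority packets leave before any lower ones
```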

RR(Round Robin)
RR schedules multiple queues in a ring. If the queue currently visited is not empty, the scheduler takes one packet from it; if it is empty, the queue is skipped and the scheduler moves on to the next queue without waiting.
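The ring-style scheduling can be sketched as follows; the three queues and their contents are illustrative, and the middle queue is left empty to show the skip behavior:

```python
from collections import deque

# Round-robin sketch: visit queues in a ring, taking one packet from
# each non-empty queue per pass; empty queues are skipped, not waited on.
queues = [deque(["a1", "a2"]), deque([]), deque(["c1", "c2", "c3"])]

order = []
while any(queues):
    for q in queues:
        if q:                        # skip empty queues without waiting
            order.append(q.popleft())
print(order)
```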

WRR(Weighted Round Robin)
Compared with RR, WRR assigns a weight to each queue. During WRR scheduling, the number of scheduling chances a queue obtains is in direct proportion to its weight. RR is equivalent to WRR with every queue's weight set to 1.
WRR maintains a counter for each queue, initialized from the queue's weight. Each time a queue is scheduled, one packet is taken from it and transmitted, and its counter decreases by 1. When a counter reaches 0, the scheduler stops serving that queue and serves the queues whose counters are still non-zero. When the counters of all queues reach 0, they are all re-initialized from the weights and a new cycle of WRR scheduling starts. Within a cycle, the queues with larger weights are scheduled more times.

For example, three queues with weights 50%, 25%, and 25% are configured for WRR scheduling.
The counters are initialized first: Count[1] = 2, Count[2] = 1, and Count[3] = 1.
1. First round of WRR scheduling:
Packet 1 is taken from queue 1, with Count[1] = 1. Packet 5 is taken from queue 2, with Count[2] = 0. Packet 8 is taken from queue 3, with Count[3] = 0.
2. Second round of WRR scheduling:
Packet 2 is taken from queue 1, with Count[1] = 0. Queues 2 and 3 do not participate in this round of WRR scheduling since Count[2] = 0 and Count[3] = 0.
Then, Count[1] = 0; Count[2] = 0; Count[3] = 0. The counters are initialized again: Count[1] = 2; Count[2] = 1; Count[3] = 1.
3. Third round of WRR scheduling:
Packet 3 is taken from queue 1, with Count[1] = 1. Packet 6 is taken from queue 2, with Count[2] = 0. Packet 9 is taken from queue 3, with Count[3] = 0.
4. Fourth round of WRR scheduling:
Packet 4 is taken from queue 1, with Count[1] = 0. Queues 2 and 3 do not participate in this round of WRR scheduling since Count[2] = 0 and Count[3] = 0.
Then, Count[1] = 0; Count[2] = 0; Count[3] = 0. The counters are initialized again: Count[1] = 2; Count[2] = 1; Count[3] = 1.
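The four rounds above can be reproduced with a small counter-based sketch. The queue contents match the example (queue 1 holds packets 1-4, queue 2 holds 5 and 6, queue 3 holds 8 and 9), and weights 50%/25%/25% become initial counters of 2, 1, and 1:

```python
from collections import deque

# Counter-based WRR sketch reproducing the worked example above.
weights = [2, 1, 1]                       # from weights 50%, 25%, 25%
queues = [deque([1, 2, 3, 4]), deque([5, 6]), deque([8, 9])]

order = []
counters = [0, 0, 0]
while any(queues):
    if all(c == 0 for c in counters):
        counters = weights[:]             # start a new WRR cycle
    for i, q in enumerate(queues):
        if q and counters[i] > 0:         # one packet per visit
            order.append(q.popleft())
            counters[i] -= 1
    for i, q in enumerate(queues):
        if not q:
            counters[i] = 0               # drop credit of drained queues
print(order)  # rounds: (1,5,8), (2), (3,6,9), (4)
```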
Statistically, the number of times packets are scheduled from each queue is in direct proportion to the queue's weight: the higher the weight, the more often the queue is served. If the interface bandwidth is 100 Mbit/s, each queue with the lowest weight (25%) still obtains a minimum bandwidth of 25 Mbit/s, which prevents the starvation that lower-priority queues suffer under SP scheduling. During WRR scheduling, empty queues are simply skipped, so when packets arrive at a queue at a low rate, the queue's unused bandwidth is shared by the other queues in proportion to their weights. WRR scheduling has two disadvantages:
1. WRR schedules by packet count, not by byte count, so a queue has no fixed bandwidth: given the same scheduling chances, a queue of long packets obtains more bandwidth than a queue of short packets. Users, however, are sensitive to bandwidth. When the average packet lengths of the queues are equal or known, users can obtain the expected bandwidth by configuring the WRR weights; when the average packet lengths vary, they cannot.
2. Services that require a short delay cannot be scheduled in time. As the example above shows, it takes several rounds to poll all the packets, so a delay-sensitive packet at the tail of a queue may wait through multiple rounds before it is sent.
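The first disadvantage is easy to see with some illustrative arithmetic. Assume two queues with equal WRR weight, one carrying 1500-byte packets and the other 100-byte packets (the sizes are made up for this example):

```python
# Two queues with equal WRR weight but different average packet lengths.
len_a, len_b = 1500, 100   # assumed average packet sizes in bytes

# Equal weights mean both queues send the same NUMBER of packets,
# so the bandwidth splits by packet length, not by weight:
share_a = len_a / (len_a + len_b)
share_b = len_b / (len_a + len_b)
print(round(share_a, 4), round(share_b, 4))  # 0.9375 vs 0.0625
```

Although both queues were configured with the same weight, the long-packet queue takes roughly 15 times the bandwidth of the short-packet queue.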

WFQ(Weighted Fair Queuing)
WFQ allocates bandwidth to flows based on their weights. In addition, to allocate bandwidth fairly among flows, WFQ schedules packets in bits. The picture below shows how bit-by-bit scheduling works.
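Serving queues literally bit by bit is impractical, so real schedulers typically approximate it by stamping each packet with a virtual finish time and serving packets in finish-time order. Below is a minimal sketch under the simplifying assumption that all packets arrive at time 0; the flow weights and packet lengths are illustrative:

```python
def wfq_order(flows):
    """flows: list of (weight, [packet lengths]), all arriving at time 0.
    Returns the service order as (flow_index, packet_index) pairs."""
    tagged = []
    for i, (weight, pkts) in enumerate(flows):
        finish = 0.0
        for j, length in enumerate(pkts):
            finish += length / weight   # virtual finish time of this packet
            tagged.append((finish, i, j))
    tagged.sort()                       # serve smallest finish time first
    return [(i, j) for _, i, j in tagged]

# Flow 0 has twice the weight of flow 1, with equal-length packets:
order = wfq_order([(2, [200, 200]), (1, [200, 200])])
print(order)
```

With weight 2, flow 0 accumulates finish times half as fast, so both of its packets are served before flow 1's second packet: over time each flow's bandwidth share is proportional to its weight, regardless of packet count.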


