QoS Issues - Issue 1: QoS Overview


QoS Overview

1 QoS Background

As network technologies develop rapidly, the IP network has changed from a pure data network into a multi-service network that carries data, voice, video, and gaming services, and the volume of data carried on networks grows exponentially. In addition, some services impose high requirements on network bandwidth and delay, while bandwidth has gradually become the bottleneck of Internet development because hardware chip development is difficult, time-consuming, and costly. As a result, network congestion occurs, packets are discarded, service quality deteriorates, and services may even become unavailable.

Network congestion must be eliminated before these services can run properly on the IP network. The most direct solution is to increase network bandwidth; however, blindly adding bandwidth is impractical in terms of O&M costs.

QoS technology was developed in this context. QoS does not increase network bandwidth. Instead, within the limited bandwidth available, it balances bandwidth allocation among services and provides E2E service quality guarantees based on the requirements of each service.

2 QoS Indicators

To improve network quality, you first need to understand the factors that affect it. Traditionally, these factors include link bandwidth, packet transmission delay, jitter, and packet loss ratio. To improve network service quality, ensure sufficient bandwidth on transmission links and reduce the packet transmission delay, jitter, and packet loss ratio. These factors have therefore become the QoS indicators.

2.1 Bandwidth

Bandwidth, also called throughput, refers to the maximum number of data bits that can be transmitted between two ends within a specified period (usually 1 second), or the average rate at which specified data flows are transmitted between two network nodes. Bandwidth is expressed in bit/s.

Two bandwidth-related concepts on a network are the uplink rate and the downlink rate. The uplink rate is the rate at which users send data to the network, and the downlink rate is the rate at which the network sends data to users. For example, the speed at which users upload files to the network is determined by the uplink rate, and the speed at which they download files is determined by the downlink rate.
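
To get a feel for what a figure in bit/s means in practice, the following minimal Python sketch estimates the ideal transfer time of a file over the uplink and downlink; the file size and link rates are assumed values, and protocol overhead is ignored.

# Minimal sketch: converting a bandwidth figure (bit/s) into an estimated
# transfer time. The file size and link rates below are assumed values.

def transfer_time_seconds(size_bytes: int, rate_bits_per_s: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / rate_bits_per_s

FILE_SIZE_BYTES = 100 * 1024 * 1024   # a 100 MB file (assumed)
DOWNLINK_BPS = 100 * 10**6            # 100 Mbit/s downlink (assumed)
UPLINK_BPS = 20 * 10**6               # 20 Mbit/s uplink (assumed)

print(f"Download: {transfer_time_seconds(FILE_SIZE_BYTES, DOWNLINK_BPS):.1f} s")
print(f"Upload:   {transfer_time_seconds(FILE_SIZE_BYTES, UPLINK_BPS):.1f} s")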

Generally, the higher the bandwidth, the greater the data transmission capability and the better the network service quality, just as a wider highway carries more traffic with fewer traffic jams. All network users want higher bandwidth, but higher bandwidth also means higher O&M costs. Therefore, bandwidth has become a serious bottleneck as the Internet develops rapidly and services become increasingly diversified.

2.2 Delay

The delay refers to the time required to transmit a packet or a group of packets from the transmit end to the receive end. It consists of the transmission delay and the processing delay.

Voice transmission is used as an example. Here, the delay is the time from when words are spoken to when they are heard. Generally, people are insensitive to a delay of less than 100 ms. If the delay is between 100 ms and 300 ms, the speaker can sense slight pauses in the responder's reply, which can be annoying to both parties. If the delay exceeds 300 ms, both the speaker and the responder obviously sense it and have to wait for responses. If the speaker cannot wait and repeats what has been said, voices overlap and the quality of the conversation deteriorates severely.
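
To make the delay components concrete, the following minimal sketch adds up per-hop transmission and processing delays for a single packet; all figures are assumed values, not measurements.

# Minimal sketch: one-way delay as the sum of transmission and processing
# delays along a path. Packet size, link rate, hop count, and per-hop
# processing time are assumed values.

PACKET_SIZE_BITS = 1500 * 8      # a full-size Ethernet frame (assumed)
LINK_RATE_BPS = 100 * 10**6      # 100 Mbit/s links (assumed)
HOPS = 5                         # forwarding devices on the path (assumed)
PER_HOP_PROCESSING_S = 100e-6    # 100 microseconds of lookup/queuing per hop (assumed)

transmission_delay = HOPS * PACKET_SIZE_BITS / LINK_RATE_BPS   # serialization onto each link
processing_delay = HOPS * PER_HOP_PROCESSING_S                 # per-hop lookup and queuing

total_ms = (transmission_delay + processing_delay) * 1000
print(f"Estimated one-way delay: {total_ms:.2f} ms")   # well below the 100 ms threshold above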

Figure 2-1 Impact of the delay on the network quality


2.3 Jitter

If network congestion occurs, packets over the same connection experience different delays. Jitter describes the degree of delay variation, that is, the difference between the maximum delay and the minimum delay. In Figure 2-2, employee A sends a voice message to employee B, saying "I will stay, but he will not." Assume that each word is one packet: the transmit end divides the voice into six packets and sends them sequentially at equal intervals. Because of the complexity of the IP network, each packet may experience a different delay, so the receiving intervals differ from the sending intervals. With the pauses between words distorted, employee B may hear the sentence as "Do I want to keep him? No!", causing a semantic misunderstanding.

Figure 2-2 Impact of the jitter on the network quality


Therefore, jitter is an important indicator for real-time transmission. Real-time services such as voice and video are especially intolerant of jitter, because excessive jitter causes voice or video interruptions.

Jitter also affects the transmission of protocol packets. Some protocol packets are sent at fixed intervals, so high jitter may cause protocol flapping.

Jitter always exists on a network, but it does not affect service quality as long as it stays within a certain tolerance. A buffer can absorb excessive jitter, but it prolongs the delay.
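
As a small illustration of the definition above, the following sketch computes jitter from a set of per-packet delay samples and shows how a receive-side buffer trades extra delay for a steady playout; the delay samples, sending interval, and hold time are all assumed values.

# Minimal sketch: jitter as the difference between the largest and smallest
# per-packet delay, as described above. The delay samples (in milliseconds)
# and the 20 ms sending interval are assumed values.

delays_ms = [20.0, 23.5, 21.2, 45.8, 22.1, 24.9]   # one delay sample per packet

jitter_ms = max(delays_ms) - min(delays_ms)
print(f"Jitter: {jitter_ms:.1f} ms")

# A receive-side (de-jitter) buffer trades extra delay for a steady stream:
# packets are played out at the original 20 ms interval after an initial hold.
PLAYOUT_HOLD_MS = 50.0   # initial buffering delay (assumed)
playout_times_ms = [PLAYOUT_HOLD_MS + i * 20.0 for i in range(len(delays_ms))]
print("Playout schedule (ms):", playout_times_ms)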

2.4 Packet Loss Ratio

The packet loss ratio refers to the ratio of lost packets to total packets. Slight packet loss does not affect services. For example, users are unaware of the loss of a bit or a packet in voice transmission. The loss of a bit or a packet in video transmission may cause the image on the screen to become garbled instantly, but the image can be restored quickly.

When data is transmitted over TCP, slight packet loss can be tolerated because TCP retransmits lost packets. However, severe packet loss reduces transmission efficiency. QoS therefore pays close attention to the packet loss ratio, which must be kept within a certain range during transmission.
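
For illustration, the packet loss ratio can be estimated at the receiver from gaps in sequence numbers, as in the following minimal sketch with assumed sample data.

# Minimal sketch: estimating the packet loss ratio from sequence numbers
# observed at the receiver. The sequence numbers are assumed sample data.

received_seq = [1, 2, 3, 5, 6, 9, 10]      # packets 4, 7 and 8 were lost

expected = max(received_seq) - min(received_seq) + 1
lost = expected - len(received_seq)
loss_ratio = lost / expected

print(f"Lost {lost} of {expected} packets -> loss ratio {loss_ratio:.1%}")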

3 QoS Service Models

How can the QoS indicators be kept within proper ranges to improve network service quality? This is where QoS service models come in. A QoS service model is not a specific function but an E2E QoS scheme. For example, several intermediate devices may sit between two communicating hosts, and E2E service quality can be guaranteed only when all devices on the network use the same QoS service model. International organizations such as the IETF and ITU-T have designed QoS models for the services they are concerned with. The following describes the three main QoS service models.

3.1 Best-Effort

Best-Effort is the simplest and the earliest service model. In Best-Effort mode, network devices just need to ensure reachable routes between networks without deploying additional functions, and an application can send any number of packets at any time without notifying the network. The network then makes the best effort to transmit the packets but provides no guarantee of performance in terms of delay and reliability.

In an ideal scenario where bandwidth is sufficient, Best-Effort is the simplest possible service model, but in practice it has limitations. The Best-Effort model is suitable only for services that do not require short delay or high reliability, such as the File Transfer Protocol (FTP) and email.

3.2 IntServ Model

The Best-Effort model cannot provide high service quality guarantees for real-time services, so the IETF put forward the Integrated Services (IntServ) model in RFC 1633 in 1994.

The IntServ model uses the Resource Reservation Protocol (RSVP) as a signaling protocol to notify a network of traffic parameters before an application sends packets to the network. The network reserves resources such as bandwidth and priority for the application based on the traffic parameters. After the application receives an acknowledgement message and confirms that sufficient resources have been reserved, it starts to send packets within the range specified by the traffic parameters. The packets sent by the application must be controlled within the range specified by the traffic parameters. A network node maintains a state for each data flow and takes actions based on this state to ensure guaranteed application performance.

Figure 3-1 IntServ model


In the IntServ model, the network reserves a dedicated path for a specified service. The resource reservation state is called a soft state: RSVP periodically sends a large number of protocol packets to refresh and check the reservation so that the path is not released or taken over by other traffic. Based on these RSVP messages, each network element checks whether it can reserve sufficient resources, and the path becomes available only when every involved network element can provide sufficient resources.
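
The per-flow soft state can be pictured as a reservation table in which each entry expires unless it is refreshed in time. The following is only a conceptual sketch of soft state, not an implementation of RSVP; the flow identifier, bandwidth, and timeout are assumed values.

import time

# Conceptual sketch of "soft state": a per-flow reservation survives only as
# long as refresh messages keep arriving. This illustrates the idea only;
# it is not an implementation of RSVP.

REFRESH_TIMEOUT_S = 30.0   # reservation lifetime without a refresh (assumed)

class ReservationTable:
    def __init__(self):
        self._flows = {}   # flow id -> (reserved bandwidth in bit/s, last refresh time)

    def refresh(self, flow_id: str, bandwidth_bps: int) -> None:
        """Install or refresh the reservation for a flow."""
        self._flows[flow_id] = (bandwidth_bps, time.monotonic())

    def expire_stale(self) -> None:
        """Remove flows whose refresh messages have stopped arriving."""
        now = time.monotonic()
        for flow_id, (_, last) in list(self._flows.items()):
            if now - last > REFRESH_TIMEOUT_S:
                del self._flows[flow_id]

    def reserved_bandwidth(self, flow_id: str):
        entry = self._flows.get(flow_id)
        return entry[0] if entry else None

table = ReservationTable()
table.refresh("10.1.1.1->10.2.2.2:5004", 2_000_000)   # 2 Mbit/s for a voice flow (assumed)
table.expire_stale()
print(table.reserved_bandwidth("10.1.1.1->10.2.2.2:5004"))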

The IntServ model can provide E2E service guarantees, but it also has clear limitations:

•   Implementation difficulty: The IntServ model requires that all nodes on the E2E path support RSVP.

•   Low resource utilization: A separate path is reserved for each data flow, so a path serves only that flow and cannot be multiplexed by other flows. As a result, limited network resources cannot be fully used.

•   Extra bandwidth usage: RSVP periodically sends a large number of protocol packets to refresh and check the reservation, which increases the load on the network.

3.3 DiffServ Model

The IETF put forward the Differentiated Services (DiffServ) model in 1998 to overcome the poor scalability of the IntServ model.

Take banking services as an example. A bank offers three card levels: centurion, gold, and common. Centurion card users enjoy one-to-one service in a private area; gold card users do not, but are served with priority; common card users have to wait in queues. These are the differentiated services a bank provides.

The DiffServ model classifies network traffic into multiple classes based on specified conditions, or marks traffic with different priorities, much like classifying users into centurion, gold, and common card holders. When network congestion occurs, classes are processed according to their priorities, implementing differentiated services. Traffic of the same class is aggregated and forwarded together so that it experiences the same delay, jitter, and packet loss ratio.

Unlike the IntServ model, the DiffServ model does not require a signaling protocol. In this model, an application does not need to apply for network resources before sending packets. Traffic classification and aggregation are completed on edge nodes, and downstream devices identify the service class from the marking and provide the corresponding service.
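
As a rough illustration of how a downstream device can identify a service from its marking, the following minimal sketch maps DSCP values to service classes. The EF/AF/default code points are standard values, but the class names and the mapping itself are assumed for illustration.

# Minimal sketch: mapping a packet's DSCP mark to a service class. The EF,
# AF, and default code points are standard; the class names are assumed.

DSCP_TO_CLASS = {
    46: "voice",          # EF: low delay, low jitter
    34: "video",          # AF41
    18: "business-data",  # AF21
    0:  "best-effort",    # default forwarding
}

def classify(dscp: int) -> str:
    """Return the service class for a given DSCP mark."""
    return DSCP_TO_CLASS.get(dscp, "best-effort")

for dscp in (46, 34, 18, 8, 0):
    print(f"DSCP {dscp:2d} -> {classify(dscp)}")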

Today's networks carry a wide variety of services, and the flexible DiffServ model is well suited to them. The DiffServ model has therefore become the mainstream solution in QoS design and applications.

4 Components in the DiffServ Model

The DiffServ model involves the following QoS mechanisms:

•   Traffic classification and marking

Traffic classification and marking are the prerequisites for differentiated services. Based on the classification result, a switch can provide targeted services.

•   Traffic policing, traffic shaping, and interface-based rate limiting

Traffic policing limits the traffic rate to a configured bandwidth and discards the excess traffic when the rate exceeds the limit. It can prevent some services or users from occupying excessive bandwidth. (A token-bucket sketch contrasting policing and shaping follows this list.)

Traffic shaping smooths the rate of outgoing traffic to absorb bursts, so that traffic is sent to downstream devices at a stable rate and unnecessary packet loss and congestion are avoided. Traffic shaping is usually applied to an interface in the outbound direction.

Interface-based rate limiting controls the total rate of all packets sent or received on an interface. If packets do not need to be classified further but the total rate passing through an interface must be controlled, interface-based rate limiting simplifies the configuration.

•   Congestion management

When network congestion occurs, congestion management uses a scheduling algorithm to determine the order in which packets are forwarded, so that the network can recover quickly. Congestion management is usually applied to an interface in the outbound direction.

•   Congestion avoidance

Congestion avoidance monitors the usage of network resources such as queues and memory buffers. When congestion occurs or worsens, the system starts to discard packets. Congestion avoidance is applied to an interface in the outbound direction. (A WRED-style sketch appears after the summary paragraph below.)
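
The difference between traffic policing (drop the excess) and traffic shaping (buffer the excess) can be pictured with a single-rate token bucket. The following is a minimal sketch under assumed rate, burst, and arrival values; it is not the exact algorithm of any particular switch.

# Minimal single-rate token bucket. Policing drops packets when tokens run
# out; shaping holds them and sends them later. Rate, burst size, and the
# packet arrivals below are assumed values for illustration only.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate_bytes_per_s = rate_bps / 8
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate_bytes_per_s)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

def police(packets, bucket):
    """Traffic policing: forward conforming packets, discard the excess."""
    return [(t, size, "forward" if bucket.allow(size, t) else "drop")
            for t, size in packets]

def shape(packets, bucket, step=0.001):
    """Traffic shaping: hold non-conforming packets until tokens accumulate."""
    out = []
    t = 0.0
    for arrival, size in packets:
        t = max(t, arrival)
        while not bucket.allow(size, t):
            t += step                              # wait for the bucket to refill
        out.append((arrival, round(t, 3), size))   # (arrival, departure, size)
    return out

# A burst of ten 1500-byte packets arriving 1 ms apart (assumed).
burst = [(i * 0.001, 1500) for i in range(10)]
print(police(burst, TokenBucket(rate_bps=1_000_000, burst_bytes=3000)))
print(shape(burst, TokenBucket(rate_bps=1_000_000, burst_bytes=3000)))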

In conclusion, packet classification is the prerequisite for implementing differentiated services. Traffic policing, traffic shaping, and interface-based rate limiting are used to prevent traffic congestion, whereas congestion management and congestion avoidance are used to eliminate traffic congestion.
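
To illustrate the "start discarding before the queue overflows" behavior behind congestion avoidance, the following minimal sketch uses WRED-style thresholds; the thresholds and maximum drop probability are assumed values rather than the defaults of any particular device.

import random

# Minimal WRED-style sketch: nothing is dropped below the low threshold,
# everything is dropped above the high threshold, and in between packets are
# dropped with a probability that grows with the queue depth.

LOW_THRESHOLD = 50      # queue depth in packets (assumed)
HIGH_THRESHOLD = 100    # queue depth in packets (assumed)
MAX_DROP_PROB = 0.2     # drop probability just below the high threshold (assumed)

def should_drop(queue_depth: int) -> bool:
    if queue_depth < LOW_THRESHOLD:
        return False
    if queue_depth >= HIGH_THRESHOLD:
        return True
    drop_prob = MAX_DROP_PROB * (queue_depth - LOW_THRESHOLD) / (HIGH_THRESHOLD - LOW_THRESHOLD)
    return random.random() < drop_prob

for depth in (30, 60, 90, 120):
    print(depth, "drop" if should_drop(depth) else "enqueue")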

The following figure shows the order in which the different QoS mechanisms process packets.

Figure 4-1 QoS processing order


5 Application of QoS on an Enterprise Network

On an enterprise network, QoS technologies do not all need to be applied on the same device. Instead, they should be deployed at different positions based on service requirements.

Figure 5-1 Deploying QoS on an enterprise network


The functions of devices at different layers are as follows:

•   Identifying services at the access layer

The access switch LSW1 functions as a border switch. It identifies, classifies, and marks data flows on the access side, and performs congestion management, congestion avoidance, and traffic shaping on the network side.

•   Providing differentiated services at the aggregation or core layer

Interfaces on the aggregation or core switch are configured to trust the QoS parameters marked at the access layer. Based on these parameters, the aggregation or core switch applies QoS policies such as queue scheduling, traffic shaping, and congestion avoidance to ensure that high-priority services are scheduled first.

In practice, deployment of specific functions depends on service requirements. In Figure 5-1, classification and marking can be configured on SwitchA to distinguish packets from different departments, and traffic policing is used on GE1/0/2 in the outbound direction to limit traffic entering the WAN. If you do not need to differentiate packets from different departments, you can directly implement interface-based rate limiting on GE1/0/2 in the outbound direction to limit traffic entering the WAN.

The same QoS technology can have a different scope depending on where it is applied. In Figure 5-1, if interface-based rate limiting is applied on GE0/0/1 and GE0/0/2 of LSW1 in the outbound direction, department 1 and department 2 are each limited to their own maximum bandwidth. If interface-based rate limiting is applied on GE0/0/1 of SwitchA in the inbound direction, department 1 and department 2 share one overall maximum bandwidth.

6 Summary

QoS components do not map to QoS indicators one-to-one; a single QoS component alone cannot guarantee a particular indicator. In practice, QoS components are combined to ensure service quality. For example, packet classification and marking provide the basis for differentiated services, while traffic policing, traffic shaping, interface-based rate limiting, congestion management, and congestion avoidance control network traffic and resource allocation in different ways based on that classification and marking. Issue 2 will describe the Modular QoS Command-Line Interface (MQC).


gululu
Admin | Created Apr 10, 2017 09:51:08

good,thanks!
Gabo
Created Aug 22, 2018 03:24:08

Thanks for the documentation, it is complete.