FAQ-QoS of NE05E&08E

Created Sep 24, 2016 18:40:41 · Latest reply Sep 24, 2016 18:40:48

This section describes the QoS functions supported by the NE. You can collect traffic statistics on packets to which QoS is applied and view the results through the corresponding display commands.
DiffServ Model

Multiple service flows can be aggregated into a behavior aggregate (BA) and then processed based on the same per-hop behavior (PHB). This simplifies the processing and storage of services.
On a DiffServ core network, QoS information is carried in each packet, so signaling processing is not required.

Simple Traffic Classification

The NE supports simple traffic classification on both physical and logical interfaces, including:

  • Ethernet interface, Ethernet sub-interface, Layer 2 Ethernet interface
  • GE interface, GE sub-interface carrying VLL or VPLS services, Layer 2 GE interface
  • Eth-Trunk interface, Eth-Trunk sub-interface, Layer 2 Eth-Trunk interface
  • VE interface
The NE also supports forced traffic classification, which is independent of the DiffServ domain mapping table. In the interface, Eth-Trunk interface, and Eth-Trunk sub-interface views, users can directly specify CoS values and drop priorities for packets. In this manner, received packets directly enter the queues of the specified CoS.

Complex Traffic Classification

During complex traffic classification, packets are classified by source/destination address, source/destination port number, or protocol type. Generally, complex traffic classification is applied at network edges. Traffic is classified based on certain rules, and a behavior (traffic control or resource allocation) is performed on traffic of the same class, enabling class-based traffic policing, traffic shaping, and congestion avoidance, and providing differentiated services for user traffic.
The NE supports the following complex traffic classification features:

  • Allows users to configure complex traffic classification in the ingress direction at UNI-side interfaces. UNI-side interfaces include main interfaces, sub-interfaces, interfaces+VLANs, trunk interfaces, trunk sub-interfaces, and ML-PPP interfaces.
  • Classifies traffic by source MAC address, destination MAC address, ID of the protocol carried at the link layer, VLAN, or 802.1p priority in Ethernet packet headers.
  • Classifies traffic by IP precedence/DSCP/ToS value, source IP address prefix, destination IP address prefix, IP packet bearer protocol ID, fragmentation flag, TCP SYN flag, TCP/UDP source port number or port number range, TCP/UDP destination port number or port number range, ICMP flag, or time range flag in IPv4 packet headers.
  • Performs behaviors on classified traffic, such as CAR, permit/deny, remarking the CoS in the NE, remarking the user packet priority, traffic statistics collection, and service mirroring.
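The rule-then-behavior flow above can be sketched as an ordered rule match. This is an illustrative model only, not the NE's implementation; the field names and behavior strings are hypothetical.

```python
# Hypothetical sketch of complex traffic classification: match a packet
# against ordered rules on header fields, then return the rule's behavior.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    src_prefix: Optional[str] = None   # e.g. "10.0." matched as a string prefix
    dscp: Optional[int] = None         # match on the DSCP value
    dst_port: Optional[int] = None     # match on the TCP/UDP destination port
    behavior: str = "permit"           # e.g. "permit", "deny", "remark-ef"

def classify(packet: dict, rules: list) -> str:
    """Return the behavior of the first matching rule, or 'permit' by default."""
    for r in rules:
        if r.src_prefix is not None and not packet["src_ip"].startswith(r.src_prefix):
            continue
        if r.dscp is not None and packet["dscp"] != r.dscp:
            continue
        if r.dst_port is not None and packet["dst_port"] != r.dst_port:
            continue
        return r.behavior
    return "permit"

rules = [
    Rule(dscp=46, behavior="remark-ef"),       # voice traffic
    Rule(dst_port=80, behavior="remark-af1"),  # web traffic
    Rule(src_prefix="192.168.", behavior="deny"),
]
print(classify({"src_ip": "10.0.0.1", "dscp": 46, "dst_port": 5060}, rules))  # remark-ef
```

A first-match rule list mirrors how classification rules are typically evaluated in order; unmatched traffic falls through to a default behavior.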

Traffic Policing

CAR is mainly used for rate limiting. With CAR enabled, a token bucket is used to measure the data flows that pass through an interface, and only the packets assigned tokens can pass through the interface in the specified period of time. In this manner, the traffic rate at the interface can be controlled.
CAR is usually applied on the edge of a network to ensure that core devices process data properly.
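The token-bucket metering that CAR relies on can be sketched as follows. This is a minimal single-bucket model for illustration; real CAR also supports two-rate/two-bucket modes and coloring, which are omitted here.

```python
import time

class TokenBucket:
    """Minimal single token bucket, as used by CAR (illustrative sketch)."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # token fill rate in bytes per second
        self.capacity = burst_bytes       # committed burst size (CBS)
        self.tokens = burst_bytes         # bucket starts full
        self.last = time.monotonic()

    def conform(self, packet_bytes: int) -> bool:
        """True if the packet gets tokens and passes; False if it exceeds the rate."""
        now = time.monotonic()
        # refill tokens for the elapsed interval, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 8 kbit/s, 1500-byte burst
print(bucket.conform(1500))  # first packet fits within the burst -> True
print(bucket.conform(1500))  # bucket drained, tokens not yet refilled -> False
```

Packets that fail `conform()` would be dropped or re-colored, which is how the interface rate is held to the configured CIR.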

Queue Scheduling

The NE supports PQ and WFQ for queue scheduling on interfaces.
Packets of different priorities are mapped into different queues. Round robin (RR) is used on each interface for queue scheduling.
Priority queuing (PQ) classifies queues into four types: top queue, middle queue, normal queue, and bottom queue, which are ordered in descending order of priorities. PQ always allows packets in a queue with a higher priority to be sent preferentially. Specifically, the NE first sends packets in the top queue. After all packets in the top queue are sent, the NE sends packets in the middle queue. Similarly, the NE sends packets in the normal queue only after all packets in the middle queue are sent, and sends packets in the bottom queue only after all packets in the normal queue are sent. In this manner, packets of key services are processed preferentially when congestion occurs on the network. Packets of common services are processed when the network is idle. This scheduling method ensures the quality of key services and fully utilizes network resources.
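The strict-priority dequeue order described above can be sketched in a few lines; the four queue names follow the text, and the rest is an illustrative model.

```python
from collections import deque

# Sketch of priority queuing (PQ): four queues in strictly descending priority.
QUEUES = ["top", "middle", "normal", "bottom"]

def pq_dequeue(queues: dict):
    """Always serve the highest-priority non-empty queue first."""
    for name in QUEUES:
        if queues[name]:
            return queues[name].popleft()
    return None  # all queues empty

q = {name: deque() for name in QUEUES}
q["normal"].extend(["n1", "n2"])
q["top"].append("t1")
print([pq_dequeue(q) for _ in range(3)])  # ['t1', 'n1', 'n2']
```

Because the top queue is always drained first, lower-priority queues can starve under sustained high-priority load, which is why PQ is usually combined with other schedulers.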
Weighted fair queuing (WFQ) is a more complex queuing process that treats services of the same priority equally and services of different priorities according to their weights. WFQ weights services based on their bandwidth and delay requirements; the weights are determined by the IP precedence in IP packet headers. When flows enter queues, WFQ uses a hash algorithm to place different flows in different queues, keeping packets of the same flow in the same queue. When flows leave queues, WFQ allocates bandwidth to them based on their IP precedence: the smaller the IP precedence value of a flow, the smaller the bandwidth allocated to it. In this manner, services of the same precedence are treated equally, and services of different precedences are treated based on their weights.
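The precedence-weighted bandwidth split can be sketched as below. The weight formula (precedence + 1) is a common convention and an assumption here, not a documented NE behavior; the hash-to-queue helper is likewise illustrative.

```python
# Sketch of WFQ bandwidth sharing: weight each flow by (IP precedence + 1)
# -- an assumed convention -- and split link bandwidth in proportion.

def wfq_shares(flows: dict, link_bw: float) -> dict:
    """flows maps flow-id -> IP precedence (0-7); returns flow-id -> bandwidth."""
    weights = {fid: prec + 1 for fid, prec in flows.items()}
    total = sum(weights.values())
    return {fid: link_bw * w / total for fid, w in weights.items()}

def flow_queue(flow_id: str, n_queues: int = 256) -> int:
    """Hash a flow identifier to a queue index, keeping one flow in one queue."""
    return hash(flow_id) % n_queues

shares = wfq_shares({"voice": 5, "web": 1, "bulk": 0}, link_bw=100.0)
print(shares)  # voice gets 6/9, web 2/9, bulk 1/9 of the 100 Mbit/s link
```

Higher precedence yields a larger weight and therefore a larger share, matching the text: flows with a smaller IP precedence value receive less bandwidth.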

Congestion Avoidance

Congestion avoidance is a traffic control mechanism used to avoid network overload by adjusting network traffic. With this mechanism, the NE can monitor the usage of network resources (such as queues and buffers) and discard packets when network congestion intensifies.
Random early detection (RED) and weighted random early detection (WRED) are usually used for congestion avoidance.
The RED algorithm sets the upper and lower limits for each queue and specifies the following rules:

  • When the length of a queue is below the lower limit, no packet is discarded.
  • When the length of a queue exceeds the upper limit, all the incoming packets are discarded.
  • When the length of a queue is between the lower and upper limits, incoming packets are discarded randomly. A random number is generated for each incoming packet and compared with the drop probability of the queue; the packet is discarded when the random number is smaller than the drop probability. The longer the queue, the higher the drop probability, which is capped at an upper limit.
Unlike RED, WRED calculates the drop probability based on the IP precedence of IP packets: packets with a higher IP precedence have a lower drop probability.
RED and WRED employ the random packet drop policy to avoid global TCP synchronization. The NE uses WRED for congestion avoidance.
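The three RED rules can be sketched as a single drop decision. The threshold and probability values below are illustrative, not NE defaults; WRED simply selects a different (gentler) profile for higher IP precedence.

```python
import random

# Sketch of the RED drop decision; WRED varies the thresholds and maximum
# drop probability per IP precedence (the profile values here are made up).

def red_drop(queue_len: int, low: int, high: int, max_p: float) -> bool:
    """Return True if the incoming packet should be dropped."""
    if queue_len < low:
        return False                      # below the lower limit: never drop
    if queue_len >= high:
        return True                       # above the upper limit: always drop
    # between the limits: drop probability grows linearly up to max_p
    p = max_p * (queue_len - low) / (high - low)
    return random.random() < p            # drop when the random number is below p

# WRED example: higher precedence gets a higher lower limit and smaller max_p,
# so its packets start dropping later and less aggressively.
wred_profiles = {0: (20, 40, 0.5), 7: (35, 40, 0.1)}  # precedence -> (low, high, max_p)
print(red_drop(10, *wred_profiles[0]))  # short queue -> False
print(red_drop(45, *wred_profiles[0]))  # over the upper limit -> True
```

Because each packet's fate is decided independently at random, flows back off at different times, which is what breaks global TCP synchronization.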
The NE supports congestion avoidance in the outbound direction of an interface. The WRED template is applied in the outbound direction.
The NE supports service-based congestion avoidance. Eight service queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) are reserved on each interface, and packets of different colors (green, yellow, and red) are mapped to different drop precedences.


HQoS

The NE supports the following HQoS functions:

  • Provides three levels of scheduling to ensure diverse services.
  • Sets parameters such as WRED, low delay, SP/WFQ, CBS, PBS, and statistics function for each queue.
  • Sets parameters such as the CIR, PIR, number of queues, and scheduling algorithm for each user.
  • Provides a complete traffic statistics function. Users can view the bandwidth usage of services and properly plan bandwidth for services by analyzing traffic.
  • Supports HQoS in the VPLS, L3VPN, and VLL scenarios.
  • Supports interface- and VLAN-based HQoS.
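The per-user CIR/PIR parameters and per-queue scheduling above can be combined into a simple two-level allocation sketch. The allocation policy (guarantee every CIR, split the remainder equally, cap at PIR, then divide each user's share among its queues by weight) is an assumption for illustration, not the NE's scheduler.

```python
# Illustrative two-level HQoS allocation sketch:
# level 1 shares port bandwidth among users (CIR guaranteed, capped at PIR),
# level 2 splits each user's share among its service queues by weight.

def hqos_allocate(port_bw: float, users: dict) -> dict:
    """users: {name: {"cir": .., "pir": .., "queues": {qname: weight}}}"""
    # guarantee every CIR first (assumes port_bw covers the sum of CIRs)
    remaining = port_bw - sum(u["cir"] for u in users.values())
    alloc = {}
    for name, u in users.items():
        # CIR plus an equal share of the leftover, capped at the user's PIR
        share = min(u["pir"], u["cir"] + remaining / len(users))
        total_w = sum(u["queues"].values())
        alloc[name] = {q: share * w / total_w for q, w in u["queues"].items()}
    return alloc

users = {
    "user1": {"cir": 20, "pir": 50, "queues": {"EF": 3, "BE": 1}},
    "user2": {"cir": 10, "pir": 30, "queues": {"AF1": 1, "BE": 1}},
}
alloc = hqos_allocate(100, users)
print(alloc["user1"]["EF"])  # user1 is capped at its PIR of 50; EF gets 3/4 -> 37.5
```

The point of the hierarchy is that each queue's bandwidth is bounded twice: by its weight within the user, and by the user's own CIR/PIR envelope on the port.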

QoS for Ethernet

The NE can perform simple traffic classification based on the 802.1p field in VLAN packets. On the ingress PE, the 802.1p field in a Layer 2 packet is mapped to the precedence field defined by the upper layer protocol, such as MPLS EXP. Then, DiffServ can be implemented for packets on the backbone network. On the egress PE, the precedence field of the upper layer protocol is mapped back to the 802.1p field.
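The ingress/egress mapping described above can be sketched as a pair of lookup tables. A 1:1 mapping is assumed here for simplicity; on a real device the DiffServ-domain mapping table is configurable.

```python
# Sketch of the priority mapping for Ethernet over an MPLS core:
# the 802.1p value is carried across the backbone in the MPLS EXP field.
# A 1:1 table is assumed; the actual DiffServ-domain mapping is configurable.

PBIT_TO_EXP = {p: p for p in range(8)}                 # ingress PE: 802.1p -> EXP
EXP_TO_PBIT = {e: p for p, e in PBIT_TO_EXP.items()}   # egress PE: EXP -> 802.1p

def ingress_map(pbit: int) -> int:
    """Map the Layer 2 802.1p priority to the MPLS EXP field at the ingress PE."""
    return PBIT_TO_EXP[pbit]

def egress_map(exp: int) -> int:
    """Map the MPLS EXP field back to the 802.1p priority at the egress PE."""
    return EXP_TO_PBIT[exp]

exp = ingress_map(5)          # a voice frame marked 802.1p priority 5
print(exp, egress_map(exp))   # the priority is preserved end to end
```

Both the 802.1p PCP and the MPLS EXP (now Traffic Class) field are 3 bits wide, which is why a direct value-for-value mapping is possible.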


QoS for ATM

  • ATM traffic scheduling
    This function is applicable to five types of traffic: CBR, RTVBR, NRTVBR, UBR, and UBR+.
    Traffic congestion management is supported. The sustainable cell rate (SCR) of services is guaranteed in the priority sequence of CBR (PCR) > RTVBR > NRTVBR > UBR+ > UBR. If there is idle bandwidth after SCR is guaranteed for all services, the idle bandwidth is allocated to RTVBR, NRTVBR, UBR, and UBR+ services in the proportion of 13:1:1:1. Tail drop is also supported.
    Uplink and downlink UPC/NPC control is supported.
  • Forced traffic classification
    Forced traffic classification is supported. You can run a command to configure forced traffic classification on the upstream interface to set the precedence and color for traffic. Then, the traffic is forwarded to the downstream interface carrying the specified precedence and color.
    Forced traffic classification is supported on serial sub-interfaces, IMA sub-interfaces, and ATM Bundle interfaces.
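The idle-bandwidth split stated for ATM scheduling is a fixed 13:1:1:1 proportion, which works out as below; the function is just a worked-arithmetic sketch of that ratio.

```python
# Sketch of the idle-bandwidth split after SCR is guaranteed for all services:
# leftover bandwidth goes to RTVBR, NRTVBR, UBR, and UBR+ in 13:1:1:1.

def split_idle(idle_bw: float) -> dict:
    ratios = {"RTVBR": 13, "NRTVBR": 1, "UBR": 1, "UBR+": 1}
    total = sum(ratios.values())  # 13 + 1 + 1 + 1 = 16
    return {svc: idle_bw * r / total for svc, r in ratios.items()}

print(split_idle(32.0))  # RTVBR gets 26.0; NRTVBR, UBR, and UBR+ get 2.0 each
```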

Parent topic: Service Features