Can the AR implement link aggregation or load balancing when multiple AR uplinks are used to access the Internet

If the physical interfaces of multiple uplinks are of the same type (GE or Ethernet interfaces), these interfaces can constitute an Eth-Trunk. The bandwidth of the Eth-Trunk is the sum of the bandwidths of the member links. For example, an Eth-Trunk consisting of two 100 Mbit/s Ethernet interfaces provides 200 Mbit/s of bandwidth (100 Mbit/s + 100 Mbit/s).
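For reference, the following is a minimal sketch that bundles two GE interfaces into an Eth-Trunk; the interface numbers are examples and the command details may vary by AR model and software version:

  interface Eth-Trunk 1
  #
  interface GigabitEthernet0/0/1
   eth-trunk 1
  #
  interface GigabitEthernet0/0/2
   eth-trunk 1

The eth-trunk 1 command under each physical interface adds it as a member of Eth-Trunk 1. Member interfaces must have the same rate and must not carry conflicting configurations such as IP addresses.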

If the two uplinks are service interfaces, for example, bandwidth leased from two carriers at 4 Mbit/s and 2 Mbit/s respectively, the two links cannot be combined into 6 Mbit/s of Internet access bandwidth. If ECMP is configured on the two service interfaces, Internet access through a single uplink may even be faster than through both. This is because the two carriers use different networks with different delay and jitter, so TCP packets (most services use TCP) may arrive out of order. The resulting reassembly and retransmission, or even teardown and re-establishment of TCP connections, makes Internet access slow and web pages fail to open. Use either of the following methods to solve the problem:
- Use specific routes to distinguish services; do not use ECMP, or use only one uplink interface.
- Use a traffic policy that redirects traffic to specific next-hop addresses so that services are not load balanced (see the sketch after this list).
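As an illustration of the second method, the following is a minimal sketch that uses a traffic policy to redirect traffic from one internal subnet to a fixed next hop; the ACL number, subnet, next-hop address, interface, and object names are examples, and command support may vary by AR model and software version:

  acl number 3001
   rule 5 permit ip source 192.168.1.0 0.0.0.255
  #
  traffic classifier SERVICE-A
   if-match acl 3001
  #
  traffic behavior TO-CARRIER-A
   redirect ip-nexthop 100.1.1.1
  #
  traffic policy UPLINK-SPLIT
   classifier SERVICE-A behavior TO-CARRIER-A
  #
  interface GigabitEthernet0/0/0
   traffic-policy UPLINK-SPLIT inbound

The classifier matches the hosts defined in ACL 3001, and the behavior redirects their packets to one carrier's next hop. A second classifier and behavior can send the remaining services to the other carrier's next hop, so each service always traverses the same uplink and TCP packets are not mis-sequenced.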

Other related questions:
Does an AR router that accesses the Internet through multiple upstream interfaces support link aggregation or load balancing
If the multiple upstream interfaces are physical interfaces of the same type (such as GE or Eth interfaces), their bandwidth can be aggregated, which is equivalent to an Eth-Trunk. The bandwidth of the aggregated interface equals the sum of the bandwidths of the member links. For example, if two 100 Mbit/s Eth interfaces are aggregated, the total bandwidth of the aggregated interface is 200 Mbit/s (100 Mbit/s + 100 Mbit/s).

However, if the multiple upstream interfaces are service interfaces, the total bandwidth after aggregation cannot reach the sum of the interface bandwidths. For example, if two broadband interfaces provided by different carriers, with bandwidths of 4 Mbit/s and 2 Mbit/s respectively, are aggregated, the aggregated rate cannot reach 6 Mbit/s. In addition, if the two upstream interfaces use ECMP load balancing, the aggregated rate decreases further and may even be lower than the rate of a single upstream interface. The reason is that the two carriers' interfaces belong to different networks with different packet transmission delays and jitters. If the two interfaces are aggregated, the response packets of TCP connections (which are used by most services) arrive out of order, causing packet reassembly, packet retransmission, or even the teardown and re-establishment of TCP connections. As a result, users cannot access the Internet at a high rate or open web pages. In this case, take the following measures to resolve the problem:
- Use specific routes to distinguish services. Do not use ECMP, or use only one upstream interface.
- Use a traffic policy that redirects traffic to specific next hops to distinguish services, and do not configure load balancing for services.

Can load balancing be implemented when multiple interfaces of the AR router are mapped to internal servers
Yes. When multiple interfaces of the AR router are mapped to internal servers, load balancing can be implemented by running the load-balance { dst-ip | dst-mac | src-ip | src-mac | src-dst-ip | src-dst-mac } command.
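As a minimal sketch, the command is typically run in the Eth-Trunk interface view; the Eth-Trunk number and hash mode below are examples:

  interface Eth-Trunk 1
   load-balance src-dst-ip

Here src-dst-ip hashes on both the source and destination IP addresses, so packets of the same flow always take the same member link while different flows are spread across the links.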

How load balancing is implemented on S series switches when link aggregation is configured
For S series switches (except the S1700), there are two load balancing modes: per-packet load balancing and per-flow load balancing.
1. Per-packet load balancing: when there are multiple physical links between the two devices of the Eth-Trunk, the first data frame of a data flow is transmitted on one physical link and the second data frame on another. The second frame may then arrive at the peer device earlier than the first, so packet mis-sequencing occurs.
2. Per-flow load balancing: this mechanism uses a hash algorithm to calculate a key from addresses in the data frame and then looks up the outbound interface in the Eth-Trunk forwarding table based on that key. Each MAC or IP address maps to a hash key, so the system forwards different flows through different outbound interfaces while keeping frames of the same flow on the same physical link. Per-flow load balancing ensures the correct sequence of data transmission, but cannot guarantee that bandwidth is evenly used across member links.
Notes: Currently, S series switches support only per-flow load balancing, including the following modes:
1. Load balancing based on the source MAC address of packets
2. Load balancing based on the destination MAC address of packets
3. Load balancing based on the source IP address of packets
4. Load balancing based on the destination IP address of packets
5. Load balancing based on the source and destination MAC addresses of packets
6. Load balancing based on the source and destination IP addresses of packets
7. Enhanced load balancing for L2, IPv4, IPv6, and MPLS packets based on the VLAN ID and source physical interface number
When you configure load balancing modes, follow these guidelines: the load balancing mode takes effect only on the outbound interface of traffic, so if load is unevenly distributed on the inbound interfaces, change the load balancing mode on the uplink outbound interfaces. Configure load balancing so that data flows are transmitted on all active links instead of only one link, preventing traffic congestion and ensuring normal service operation. For example, if data packets have only one destination MAC address and IP address, you are advised to configure load balancing based on the source MAC address and IP address; if you instead balance on the destination MAC address and IP address, the data flow may be transmitted on only one link, causing traffic congestion.
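For example, following the guideline above, when most packets share a single destination MAC address and IP address, hashing on the source address spreads the flows across the member links. A sketch on an S series switch (the Eth-Trunk number is an example and the available modes depend on the model):

  interface Eth-Trunk 1
   load-balance src-ip

With src-ip, frames from different source IP addresses are hashed to different member links, so traffic is not concentrated on one link even though the destination is fixed.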

Link load balancing on an AR
Load balancing means that concurrent access requests and data traffic are evenly distributed among multiple devices, greatly improving service processing capabilities. Load balancing devices in the Cache system are classified into load balancing subsystems (LBSs) and global load balancers (GLBs) based on deployment scenarios.
- An LBS is used in in-band policy-based routing (PBR) deployment scenarios. PBR is configured on a router on the live network to direct upstream and downstream HTTP traffic to the F5, and the LBS then load balances the traffic to the Cache system.
- A GLB is used in in-band DNS redirection deployment scenarios. DNS requests of intranet users are resolved to HTTP proxy IP addresses and are then redirected to the Cache system.

Does an AR support two uplink interfaces in load balancing mode
Yes. If an AR is configured with two WAN interfaces, the two uplink interfaces can work in load balancing mode. This can be achieved in either of the following ways: 1. Configure two equal-cost static routes (a sketch follows). 2. Configure policy-based routing to change the forwarding path of packets.
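A minimal sketch of the first method, with two equal-cost default static routes pointing to the two carrier next hops (the addresses are examples):

  ip route-static 0.0.0.0 0.0.0.0 100.1.1.1
  ip route-static 0.0.0.0 0.0.0.0 200.1.1.1

Because the two routes have the same destination and preference, both are installed in the routing table and traffic is load balanced between the two next hops. Note the TCP mis-sequencing risk described above when the two uplinks belong to different carriers.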
