How do S series switches implement load balancing when RRPP is deployed


For S series switches, each RRPP ring supports only one blocked port, so load balancing cannot be implemented within a single ring. You can configure two RRPP domains on the same physical ring and add the ring ports to both domains. Because each domain blocks a different port, the traffic protected by the two domains is forwarded along different paths around the ring, implementing load balancing.
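The idea can be pictured with a short sketch. The following Python snippet is only a conceptual model, not switch configuration; the domain names, VLAN ranges, and port names are hypothetical.

```python
# Conceptual model of RRPP load balancing with two domains on one physical
# ring. Each domain protects a group of VLANs and blocks a different port,
# so the two VLAN groups travel around the ring in different directions.
# All names and ranges below are illustrative assumptions.

DOMAINS = {
    "domain1": {"protected_vlans": range(100, 200), "blocked_port": "GE0/0/1"},
    "domain2": {"protected_vlans": range(200, 300), "blocked_port": "GE0/0/2"},
}

def forwarding_path(vlan_id: int) -> str:
    """Describe which ring port a VLAN's traffic is blocked on."""
    for name, domain in DOMAINS.items():
        if vlan_id in domain["protected_vlans"]:
            return (f"VLAN {vlan_id} is protected by {name}; its traffic is "
                    f"blocked on {domain['blocked_port']} and therefore uses "
                    f"the other direction of the ring")
    return f"VLAN {vlan_id} is not protected by any RRPP domain"

if __name__ == "__main__":
    print(forwarding_path(150))  # avoids GE0/0/1
    print(forwarding_path(250))  # avoids GE0/0/2
```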

Other related questions:
How do S series switches implement fast switchover when RRPP is deployed
For S series switches, fast RRPP switchover is guaranteed by the switchover mechanism and does not depend on the interval for sending Hello packets. Although the minimum interval for sending Hello packets is 1s, Hello packets are used only for loop detection. The switchover mechanism of an RRPP ring is as follows:
- If a link in the ring fails, the port directly connected to the link goes Down.
- The transit node immediately sends a Link-Down packet to the master node to report the link status change.
- On receiving the Link-Down packet, the master node considers the ring failed, unblocks its secondary port, and sends a packet instructing the other transit nodes to update their forwarding databases (FDBs).
- After the transit nodes refresh their FDBs, traffic is switched to links in the Up state.
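This sequence can be modeled as a small event-driven sketch. The following Python snippet is purely illustrative; the class and method names are hypothetical and do not correspond to any switch software interface.

```python
# Simplified model of the RRPP switchover sequence described above.

class MasterNode:
    def __init__(self):
        self.secondary_port_blocked = True   # ring Complete: secondary blocked
        self.ring_state = "Complete"

    def on_link_down(self):
        """Called when a Link-Down packet arrives from a transit node."""
        self.ring_state = "Failed"
        self.secondary_port_blocked = False  # unblock the secondary port
        self.notify_fdb_flush()

    def notify_fdb_flush(self):
        # In a real ring, the master sends a packet instructing all transit
        # nodes to refresh their FDBs, after which traffic switches to the
        # links that are still Up.
        print("Instructing transit nodes to flush FDBs")


class TransitNode:
    def __init__(self, master: MasterNode):
        self.master = master

    def on_port_down(self):
        """A directly connected link failed: report it immediately."""
        print("Sending Link-Down packet to master")
        self.master.on_link_down()


if __name__ == "__main__":
    master = MasterNode()
    transit = TransitNode(master)
    transit.on_port_down()
    print("Ring state:", master.ring_state,
          "| secondary blocked:", master.secondary_port_blocked)
```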

How many devices at most can be deployed in an RRPP ring when RRPP is deployed on an S series switch
For S series switches, the master node periodically sends Health packets through its primary port. If the secondary port receives these Health packets in time, the master node considers the RRPP ring to be in the Complete state and blocks the secondary port to eliminate loops. Health packets are forwarded by the switching chip at high speed, so an RRPP ring theoretically supports an unlimited number of devices. However, when many devices are configured in an RRPP ring, rectifying a link or node fault takes longer. It is recommended that a maximum of 16 devices be configured in an RRPP ring.
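The Health-packet check can be illustrated with a minimal sketch. The following Python snippet is a conceptual model only; the timer values and the assumed fail multiplier are examples, not product defaults.

```python
# Illustrative sketch of the Health-packet check on the RRPP master node.

import time

HELLO_INTERVAL = 1.0   # interval for sending Health packets (seconds)
FAIL_MULTIPLIER = 3    # assumed number of missed packets before declaring failure


class MasterNode:
    def __init__(self):
        self.last_health_rx = time.monotonic()
        self.ring_state = "Complete"          # secondary port blocked

    def on_health_received(self):
        """The secondary port received a Health packet that traversed the ring."""
        self.last_health_rx = time.monotonic()
        self.ring_state = "Complete"

    def poll(self):
        """Periodic check: no Health packets for too long => ring Failed."""
        if time.monotonic() - self.last_health_rx > HELLO_INTERVAL * FAIL_MULTIPLIER:
            self.ring_state = "Failed"        # unblock the secondary port
        return self.ring_state
```

The more nodes a ring contains, the longer packets and fault notifications take to traverse it, which is why keeping the ring small shortens fault recovery time.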

How load balancing is implemented on S series switches when link aggregation is configured
For S series switches (except the S1700), there are two load balancing modes: per-packet load balancing and per-flow load balancing.
1. Per-packet load balancing: When there are multiple physical links between the two devices of an Eth-Trunk, the first data frame of a flow is transmitted on one physical link and the second data frame on another. The second frame may then arrive at the peer device before the first, causing packet mis-sequencing.
2. Per-flow load balancing: This mechanism applies a hash algorithm to addresses in a data frame to generate a hash key, and the system searches the Eth-Trunk forwarding table for the outbound interface based on that key. Different MAC or IP addresses map to different hash keys, so the system uses different outbound interfaces to forward the data. This ensures that frames of the same data flow are forwarded on the same physical link, implementing flow-based load balancing. Per-flow load balancing preserves the transmission order of a flow but cannot guarantee even bandwidth usage across links.
Notes: Currently, S series switches support only the per-flow load balancing mode, including the following:
1. Load balancing based on the source MAC address of packets
2. Load balancing based on the destination MAC address of packets
3. Load balancing based on the source IP address of packets
4. Load balancing based on the destination IP address of packets
5. Load balancing based on the source and destination MAC addresses of packets
6. Load balancing based on the source and destination IP addresses of packets
7. Enhanced load balancing for L2, IPv4, IPv6, and MPLS packets based on the VLAN ID and source physical interface number
When you configure load balancing modes, follow these guidelines:
- The load balancing mode takes effect only on the outbound interface of traffic. If load is unevenly distributed on the inbound interfaces, change the load balancing mode on the uplink outbound interfaces.
- Configure load balancing so that data flows are transmitted on all active links instead of only one link, preventing traffic congestion and ensuring normal service operation. For example, if data packets have only one destination MAC address and IP address, configure load balancing based on the source MAC address and IP address; hashing on the destination MAC address and IP address in that case may send the entire data flow over a single link, causing traffic congestion, as shown in the sketch below.
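The per-flow mechanism can be illustrated with a short sketch. The following Python snippet is a conceptual model only; the hash function, mode names, and link names are assumptions, and real switches compute the hash in hardware.

```python
# Minimal sketch of per-flow (hash-based) load balancing across Eth-Trunk
# member links. Frames of the same flow hash to the same link, preserving
# frame order within the flow.

import zlib

MEMBER_LINKS = ["GE0/0/1", "GE0/0/2", "GE0/0/3", "GE0/0/4"]

def select_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str,
                mode: str = "src-dst-ip") -> str:
    """Pick the outbound member link based on the configured load-balancing mode."""
    if mode == "src-mac":
        key = src_mac
    elif mode == "dst-mac":
        key = dst_mac
    elif mode == "src-dst-mac":
        key = src_mac + dst_mac
    elif mode == "src-dst-ip":
        key = src_ip + dst_ip
    else:
        raise ValueError(f"unknown mode: {mode}")
    index = zlib.crc32(key.encode()) % len(MEMBER_LINKS)
    return MEMBER_LINKS[index]

if __name__ == "__main__":
    # Many sources sending to one destination: hashing on source fields
    # spreads traffic over the links; hashing only on destination fields
    # would map every flow to the same link.
    for host in ("10.1.1.1", "10.1.1.2", "10.1.1.3"):
        print(host, "->", select_link("aaaa", "bbbb", host, "10.2.2.2",
                                      mode="src-dst-ip"))
```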

Methods used to implement virtual load balancing
Currently, both virtual and physical SVN load balancing can be implemented. Virtual load balancing (vLB) balances the load among multiple WIs. Physical load balancing likewise distributes the load across WIs, preventing a large number of users from accessing the same WI.

Methods used to deploy load balancers and firewalls
Load balancers and firewalls can be attached to the core switches directly or indirectly. Indirect attachment is recommended.
(1) If load balancers and firewalls are attached directly to the core switches, all traffic passes through them, even traffic that does not need these services. This wastes interface bandwidth, increases the risk of network faults, and makes subsequent expansion inconvenient. The advantage is high forwarding speed.
(2) If load balancers and firewalls are attached indirectly to the core switches, only the traffic that requires these services is redirected to them after you configure routing policies or dynamic routing instances on the core switches. This reduces the traffic load and the risk of network faults, and makes subsequent expansion and optimization more convenient. The disadvantage is somewhat lower forwarding speed.
