BH620 V2 product description


The BH620 V2 is a server blade based on the Intel new-generation Romley processor platform. It features excellent computing capability and flexible scalability. The BH620 V2 is installed in the E6000 server shelf (shelf for short) and is managed by the management module (MM) in a centralized manner.

Other related questions:
Description of the GCC byte mode of NG WDM products
Question: In the OTN architecture, the ESC mode is adopted. Sometimes, the GCC byte mode needs to be set using the :cm-set-fiberport command. The last parameter of this command indicates the channel rate: 13 denotes three-byte GCC0, 14 denotes the GCC12_9 byte, and 15 denotes the GCC12_18 byte. How are three-byte GCC0, GCC12_9, and GCC12_18 calculated?

Root cause: None

Answer:
1. The channel rate is an equivalent rate. Three-byte GCC0 indicates that the bandwidth provided by GCC0 is 3 x 64K, and GCC12_9 indicates that the total bandwidth provided by GCC1 and GCC2 is 9 x 64K. In the OTN architecture, GCC0 occupies two bytes, and the frame structure is always 4 x 4080 bytes.
2. At the 2.5G rate, the bandwidth of GCC0 is calculated using the following formulas:
   Frame frequency = 2.5G/(4 x 4080 x 8)
   Bandwidth of GCC0 = 2 x frame frequency x 8
   Therefore, the approximate bandwidth is 320K = 5 x 64K. Likewise, at the 5G and 10G rates, the approximate bandwidths are 10 x 64K and 20 x 64K respectively. (The length of an OTN frame is fixed, whereas the frame frequency varies with the line rate.) Due to the limited number of CPU channels, the bandwidth provided by GCC0 must be reduced so that the equipment can be used like the previous equipment. That is, only three bytes of GCC0 are used; the supervisory bandwidth provided by GCC0 is therefore 3 x 64K.
3. By the same method, at the 2.5G rate, GCC1 and GCC2 together provide 10 x 64K of bandwidth; that is, GCC1 and GCC2 are used as nine bytes. GCC1 and GCC2, which are always bound together when used, provide 20 x 64K of bandwidth at the 5G rate and 40 x 64K of bandwidth at the 10G rate.

Suggestion and conclusion: The OTN-framed OTU board supports the DCC types GCC0, GCC12_9, and GCC12_18. The OTN 2.5G board supports GCC0 and GCC12_9, and the OTN 5G and OTN 10G boards support GCC0 and GCC12_18.
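As a quick check of the arithmetic above, the following minimal Python sketch (the names are illustrative, not from any product tool) computes the equivalent GCC bandwidths at each line rate from the frame-frequency formulas given in the answer:

    # Approximate GCC supervisory bandwidths in the OTN architecture.
    # Illustrative sketch only; the constants follow the formulas above.

    OTN_FRAME_BITS = 4 * 4080 * 8  # an OTN frame is always 4 x 4080 bytes

    def gcc_bandwidth_bps(line_rate_bps, num_bytes):
        """Equivalent bandwidth of a GCC channel carrying num_bytes per frame."""
        frame_frequency = line_rate_bps / OTN_FRAME_BITS  # frames per second
        return num_bytes * frame_frequency * 8            # bits per second

    for rate_name, rate in [("2.5G", 2.5e9), ("5G", 5e9), ("10G", 10e9)]:
        gcc0 = gcc_bandwidth_bps(rate, 2)   # GCC0 occupies two bytes per frame
        gcc12 = gcc_bandwidth_bps(rate, 4)  # GCC1 + GCC2 occupy four bytes per frame
        print(f"{rate_name}: GCC0 ~ {gcc0 / 64e3:.1f} x 64K, "
              f"GCC1+GCC2 ~ {gcc12 / 64e3:.1f} x 64K")

Running this gives roughly 4.8, 9.6, and 19.1 x 64K for GCC0 and 9.6, 19.1, and 38.3 x 64K for GCC1 plus GCC2, which round to the 5/10/20 and 10/20/40 x 64K approximations quoted above before the CPU-channel limit caps them at 3, 9, and 18 bytes respectively.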

Differences between the RH2288 V2, RH2288H V2, and RH2288E V2
The RH2288 V2 provides four GE NICs on the mainboard, whereas the RH2288H V2 uses flexible NIC modules. The number of PCIe slots on the riser cards also differs. The RH2288E V2 supports installation on holding guide rails. For details, see their respective user guides.

Differences among RH1288A V2, RH1288 V2, and RH1288 V3
The differences among the RH1288A V2, RH1288 V2, and RH1288 V3 are as follows. For the latest specifications, see the official website.
• The RH1288A V2 uses the Huawei Hi1710 management chip (iBMC) and supports Intel® Xeon® E5-2600 v2 series processors, eight DIMM slots, and two LOM GE electrical ports.
• The RH1288 V2 uses the iMana management software and supports Intel® Xeon® E5-2600 and E5-2600 v2 series processors, 24 DIMM slots, and GE or 10GE LOM NIC modules.
• The RH1288 V3 uses the Huawei Hi1710 management chip (iBMC) and supports Intel® Xeon® E5-2600 v3 series processors, 16 DIMM slots, and GE or 10GE LOM NIC modules.

Differences between the 4-socket RH5885 V2 and 8-socket RH5885 V2
An 8-socket RH5885 V2 server is made up of two 4-socket RH5885 V2 servers connected through four QPI cables. One of the 4-socket RH5885 V2 servers serves as the master node and the other as the slave node.

Hardware differences:
• Rear I/O module: On the 4-socket server, the rear I/O module can be configured with two hard disks and the QPI cables cannot be connected. On the 8-socket server, the rear I/O module cannot be configured with two hard disks and the QPI cables are required.
• Hard disk quantity: 10 (4-socket server); 16 (8-socket server)
• CPU configuration: The 8-socket server must be configured with all CPUs, whereas the 4-socket server can be configured with two or four CPUs.
• DIMM configuration: On the 4-socket server, DIMMs must be installed symmetrically on the memory risers. On the 8-socket server, in addition to the symmetry requirements, the DIMMs must be installed on both nodes and the DIMM configuration must be the same on the two memory risers.
• RAID configuration: The 8-socket server requires only one RAID controller card, which must be installed on the master node.

Software differences:
• The slave node of the 8-socket server has no BIOS. Therefore, no BIOS information is displayed on iMana.
• The power button on the front panel of the slave node of the 8-socket server is available, but there is no related power control page on the iMana WebUI.
• The iMana of the slave node of the 8-socket server does not provide the black box, power-on strategy, system start strategy, or serial port switchover functions.
