Introduction
Redundant array of independent disks (RAID) is a method of storing data across multiple physical disks, with redundancy at most levels, so that the disks act as a single logical disk.
RAID levels range from RAID 0 to RAID 50, and the commonly used levels are RAID 0, RAID 1, RAID 3, RAID 5, RAID 6, RAID 10, and RAID 50.
Read and Write Principles of Common RAID Levels
How Does RAID 0 Work?
RAID 0 (also referred to as stripe or striping) has the highest storage performance of all RAID levels. In RAID 0 arrays, data is stored on multiple disks, which allows data requests to be concurrently processed on the disks. The concurrent data processing can make the best use of the bus bandwidth, significantly improving the overall disk read/write performance.
Figure 1-1 shows how I/O data requests are distributed across the three disks of a logical disk (RAID 0 array) for concurrent processing. RAID 0 allows originally sequential data to be processed on three physical disks at the same time. Theoretically, concurrent operations on three disks triple the read/write performance. Although the actual speed is reduced by factors such as the bus bandwidth and falls short of the theoretical value, concurrent transmission is still faster than serial transmission for large volumes of data.
Figure 1-1 RAID 0 data distribution
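The round-robin striping described above can be sketched as follows. This is a minimal illustration only, assuming a fixed stripe-unit size and three disks as in Figure 1-1; the names and sizes are hypothetical, not the storage system's implementation:

```python
STRIPE_UNIT = 4  # bytes per stripe unit (real arrays use e.g. 64 KiB)

def raid0_write(data: bytes, num_disks: int = 3):
    """Distribute sequential data across the disks round-robin."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE_UNIT):
        unit = data[i:i + STRIPE_UNIT]
        disks[(i // STRIPE_UNIT) % num_disks].extend(unit)
    return disks

def raid0_read(disks):
    """Reassemble the original byte stream from the stripes."""
    out = bytearray()
    offsets = [0] * len(disks)
    disk = 0
    while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
        out.extend(disks[disk][offsets[disk]:offsets[disk] + STRIPE_UNIT])
        offsets[disk] += STRIPE_UNIT
        disk = (disk + 1) % len(disks)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOPQRSTUVWX"  # 24 bytes -> 6 stripe units
disks = raid0_write(data)
assert raid0_read(disks) == data
```

Because consecutive stripe units land on different disks, a large sequential request is served by all three disks at once, which is the source of the performance gain described above.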
How Does RAID 1 Work?
RAID 1 (also referred to as mirroring) maximizes data availability and recovery capability. RAID 1 automatically copies all data written to one disk to the other disk in a RAID group.
Figure 1-2 shows how data is processed on a logical disk (RAID 1 array) consisting of two physical disks. In RAID 1, the data written to disk 0 is automatically copied to disk 1. The data is first accessed from disk 0 (the source disk). If the data access succeeds, the data on disk 1 (a mirror disk) will not be accessed. If the data access fails, the data on disk 1 will be accessed. The switch from disk 0 to disk 1 does not interrupt data processing.
Figure 1-2 RAID 1 data distribution
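The mirror-and-fallback behavior described for Figure 1-2 can be sketched in a few lines. A disk is modeled here as a plain dictionary and a deleted key simulates a failed read; this is an illustration of the principle, not the storage system's code:

```python
def raid1_write(block_id, data, disk0, disk1):
    # Every write goes to both disks; disk1 is an exact mirror of disk0.
    disk0[block_id] = data
    disk1[block_id] = data

def raid1_read(block_id, disk0, disk1):
    # Read from the source disk first; fall back to the mirror on failure.
    data = disk0.get(block_id)
    if data is None:
        data = disk1.get(block_id)  # transparent switch, no interruption
    return data

disk0, disk1 = {}, {}
raid1_write(0, b"payload", disk0, disk1)
del disk0[0]  # simulate a failure on the source disk
assert raid1_read(0, disk0, disk1) == b"payload"
```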
How Does RAID 5 Work?
RAID 5 is a storage solution that balances storage performance, data security, and storage costs. To ensure data reliability, RAID 5 computes parity information (a bytewise XOR of the data blocks in each stripe) and distributes it across all member disks. If a disk in a RAID 5 array is faulty, data on the failed disk can be rebuilt from the data and parity on the other member disks in the array.
As shown in Figure 1-3, P0 is the parity data of stripes D0, D1, and D2; P1 is the parity data of stripes D3, D4, and D5; and so on. RAID 5 does not store a full backup copy of the data. Instead, data and its parity information are stored on different member disks in the array. If data on a RAID 5 member disk is damaged, it can be restored from the remaining data and the parity information.
Figure 1-3 RAID 5 data distribution
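The parity relationship in Figure 1-3 is plain bytewise XOR, which is why any single failed chunk can be rebuilt from the survivors. A minimal sketch for one chunk group (real RAID 5 rotates the parity chunk across member disks; the helper name is illustrative):

```python
def xor_blocks(blocks):
    """Bytewise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"\x11\x22", b"\x33\x44", b"\x55\x66"
p0 = xor_blocks([d0, d1, d2])  # parity for the chunk group (P0 in Figure 1-3)

# Disk holding d1 fails: rebuild it from the surviving data plus parity.
rebuilt = xor_blocks([d0, d2, p0])
assert rebuilt == d1
```

XOR is its own inverse, so XOR-ing the parity with all surviving data blocks yields exactly the missing block; with two missing blocks the equation is underdetermined, which is why RAID 5 tolerates only one chunk failure.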
How Does RAID 10 Work?
RAID 10 is a combination of RAID 0 and RAID 1. It allows disks to be mirrored (RAID 1) and then striped (RAID 0). RAID 10 is a solution that provides outstanding storage performance (similar to RAID 0) and data security (same as RAID 1).
As shown in Figure 1-4, disks 0 and 1 form subgroup 0, disks 2 and 3 form subgroup 1, and disks in the same subgroup are mirrors of each other. If I/O requests are sent to disks in RAID 10, the sequential data requests are distributed to the two subgroups for processing (RAID 0 mode). At the same time, in RAID 1 mode, when data is written into disk 0, a copy is automatically created on disk 1; when data is written into disk 2, a copy is automatically created on disk 3.
Figure 1-4 RAID 10 data distribution
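The layout in Figure 1-4 (striping across two mirrored sub-groups) can be sketched as follows. This is an illustration under the same assumptions as Figure 1-4: four disks, sub-group 0 on disks 0/1 and sub-group 1 on disks 2/3; the stripe-unit size is hypothetical:

```python
STRIPE_UNIT = 4

def raid10_write(data: bytes):
    disks = [bytearray() for _ in range(4)]
    for n, i in enumerate(range(0, len(data), STRIPE_UNIT)):
        unit = data[i:i + STRIPE_UNIT]
        sub = n % 2                      # RAID 0: alternate between sub-groups
        disks[2 * sub].extend(unit)      # primary disk of the sub-group
        disks[2 * sub + 1].extend(unit)  # RAID 1: automatic mirror copy
    return disks

disks = raid10_write(b"ABCDEFGHIJKLMNOP")
assert disks[0] == disks[1]  # sub-group 0 disks mirror each other
assert disks[2] == disks[3]  # sub-group 1 disks mirror each other
```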
Comparison of Common RAID Levels
Comparison Between RAID Levels of OceanStor V3 and V5 Series Storage Systems
When selecting a RAID level, take the following into consideration:
Reliability
Read/write performance
Disk utilization
Different RAID levels have different reliability, read/write performance, and disk utilization. Table 1-1 describes the comparison between RAID levels.
Table 1-1 Comparison between RAID levels
| RAID Level | Redundancy and Data Recovery Capability | Read Performance | Write Performance | Disk Utilization | Maximum Number of Allowed Faulty Disks |
|---|---|---|---|---|---|
| RAID 0 | No data redundancy. Damaged data cannot be restored. | High | High | 100% | 0 |
| RAID 1 | High, with full data redundancy. If a chunk fails, its data can be recovered from the mirrored chunk. | Relatively high | Low | (1/N) × 100%, that is, 50% for a two-disk array | N-1 (N disks form a RAID 1 array) |
| RAID 3 | Relatively high. Each chunk group has one dedicated parity chunk. Data on any data chunk can be recovered from the parity chunk and the remaining data chunks. If two or more chunks fail, the RAID group fails. | High | Low | Supports flexible configurations from 2D+1P to 13D+1P; see the note below for the utilization formula. | 1 |
| RAID 5 | Relatively high. Parity data is distributed across different chunks and occupies the space of one chunk in each chunk group. RAID 5 tolerates the failure of only one chunk. If two or more chunks fail, the RAID group fails. | Relatively high | Relatively high | Supports flexible configurations from 2D+1P to 13D+1P; see the note below for the utilization formula. | 1 |
| RAID 6 | Relatively high. Two groups of parity data are distributed across different chunks and occupy the space of two chunks in each chunk group. RAID 6 tolerates the simultaneous failure of two chunks. If three or more chunks fail, the RAID group fails. | Medium | Medium | Supports flexible configurations from 2D+2P to 26D+2P; see the note below for the utilization formula. | 2 |
| RAID 10 | High. Multiple chunks can fail. If a chunk fails, its data can be recovered from the mirrored chunk. If a data chunk and its mirrored chunk fail simultaneously, the RAID group fails. | Relatively high | Relatively high | 50% | N (2N disks form a RAID 10 array) |
| RAID 50 | Relatively high. Parity data of each RAID 5 sub-group is distributed across different chunks, and each sub-group tolerates the failure of only one chunk. If two or more chunks in a RAID 5 sub-group fail, the RAID group fails. | Relatively high | Relatively high | Depends on the RAID 5 sub-group configuration; see the note below for the utilization formula. | 1 per RAID 5 sub-group |

NOTE: D refers to a data block and P to a parity block. For a RAID policy with an xD+yP configuration, the disk utilization is [x/(x + y)] × 100%.
Configure RAID policies according to the following rules:
For critical service systems, such as billing systems of carriers and class-A financial online transaction systems, configure RAID 6 (8D+2P) for the performance tier. For non-critical services, configure RAID 5 (8D+1P) for the performance tier.
RAID 6 is recommended for the capacity tier (NL-SAS disk).
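The xD+yP utilization formula from the note above can be checked quickly for the recommended configurations. The function name is illustrative only:

```python
def disk_utilization(x: int, y: int) -> float:
    """[x / (x + y)] * 100% for x data blocks and y parity blocks."""
    return x / (x + y) * 100

assert disk_utilization(8, 2) == 80.0                 # RAID 6 8D+2P
assert abs(disk_utilization(8, 1) - 88.89) < 0.01     # RAID 5 8D+1P
```

This is the trade-off behind the rules above: RAID 6 (8D+2P) gives up about 9% of capacity relative to RAID 5 (8D+1P) in exchange for tolerating a second disk failure.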
Comparison Between RAID Levels of OceanStor Dorado V3 Series Storage Systems
Dorado V3 storage systems use dynamic RAID for redundancy and provide different levels of protection based on the number of parity bits in the RAID group. The storage systems provide three protection levels: RAID 5, RAID 6, and RAID-TP. Table 1-2 compares the three protection levels without considering hot spare space.
Table 1-2 Protection levels of storage pools
| Protection Level | Number of Parity Bits | Redundancy and Data Recovery Capability | Maximum Number of Allowed Faulty Disks |
|---|---|---|---|
| RAID 5 | 1 | Relatively high. Parity data is distributed across different chunks and occupies the space of one chunk in each chunk group. RAID 5 tolerates the failure of only one chunk. If two or more chunks fail, the RAID group fails. | 1 |
| RAID 6 | 2 | High. Parity data is distributed across different chunks and occupies the space of two chunks in each chunk group. RAID 6 tolerates simultaneous failures of two chunks. If three or more chunks fail, the RAID group fails. | 2 |
| RAID-TP | 3 | High. Parity data is distributed across different chunks and occupies the space of three chunks in each chunk group. RAID-TP tolerates simultaneous failures of three chunks. If four or more chunks fail, the RAID group fails. | 3 |