Method used to plan the stripe depth for LUNs


Method used to plan the stripe depth for LUNs:
The sizes of I/Os delivered by the host vary with service applications. To improve the I/O performance of the storage system, choose a stripe depth that matches the size of the I/Os delivered by the host.

Other related questions:
Method used to set the LUN stripe depth
1. The stripe depth can only be set during LUN creation and cannot be modified afterwards.
2. Open the Create LUN dialog box: in the navigation tree, click the storage system node and choose Storage Resources > LUNs. In the right function pane, choose Create > LUN. The Create LUN dialog box is displayed. Set the parameters, including the stripe depth, in the Create LUN dialog box.
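
Because the stripe depth is fixed at creation time, it helps to validate it before creating the LUN. The following is a minimal illustrative sketch only; the parameter names and the list of supported stripe depths are assumptions and do not represent the actual management interface or CLI.

```python
# Hypothetical sketch: parameter names and supported depths are assumptions,
# not the real Create LUN interface. The key point is that stripe_depth_kb
# must be decided here, because it cannot be changed after creation.
def plan_lun(name, capacity_gb, stripe_depth_kb):
    """Collect LUN creation parameters; stripe depth cannot be changed later."""
    supported_depths_kb = (16, 32, 64, 128, 256)   # assumed supported values
    if stripe_depth_kb not in supported_depths_kb:
        raise ValueError(f"unsupported stripe depth: {stripe_depth_kb} KB")
    return {"name": name, "capacity_gb": capacity_gb, "stripe_depth_kb": stripe_depth_kb}

lun = plan_lun("lun_oltp_01", capacity_gb=500, stripe_depth_kb=32)
```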

Feature of stripe depth of LUNs
The stripe depth refers to the size of blocks in a stripe of a disk array that uses striped data mapping. It also refers to the size of consecutively addressed virtual blocks mapped to consecutively addressed disk blocks on a single member extent of a disk array. Therefore, choose the most appropriate stripe depth based on the size of data I/Os. If the storage system is used to store large amounts of sequential data, such as media data, a large stripe depth is recommended (64 KB or more). If the storage system is used to store large amounts of random data, such as transaction processing data, a smaller stripe depth is recommended (32 KB).
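
The sizing guidance above can be summarized in a small helper. This is a sketch of the recommendation only (sequential workloads: 64 KB or more; random workloads: 32 KB); the workload labels are assumptions used for illustration.

```python
def recommend_stripe_depth_kb(workload):
    """Map a workload type to the recommended stripe depth from the guidance above."""
    if workload == "sequential":      # e.g. media data, large sequential I/Os
        return 64                     # 64 KB or more is recommended
    if workload == "random":          # e.g. transaction processing, small random I/Os
        return 32                     # 32 KB is recommended
    raise ValueError(f"unknown workload type: {workload}")

print(recommend_stripe_depth_kb("sequential"))  # 64
print(recommend_stripe_depth_kb("random"))      # 32
```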

Method used to plan LUN read/write policies
You can plan LUN read/write policies as follows:
1. Planning the write policy
You can select the following write policies during LUN creation: write through, write back and mirroring, write back without mirroring, mandatory write back and mirroring, and mandatory write back without mirroring. The differences among the write policies are as follows (see the sketch after this answer):
Write through: writes data directly to disks. Each write operation must access the disk, resulting in low performance but high reliability.
Write back: writes data to the cache and then writes the data to disks when host I/Os are idle. Write operations do not access the disk immediately, resulting in high performance but lower reliability.
Write back and mirroring: writes data to both the local cache and the peer cache simultaneously.
Write back without mirroring: writes data to the local cache only.
Mandatory write back and mirroring: the storage device must write data to the local cache and the peer cache simultaneously.
Mandatory write back without mirroring: the storage device must write data to the local cache only.
2. Planning prefetch policies
You can select the following prefetch policies: intelligent prefetch, constant prefetch, variable prefetch, and non-prefetch.
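
For reference, the policies above can be summarized in code. This is a descriptive sketch of the options and their trade-offs, not a configuration interface; the enum and member names are illustrative.

```python
from enum import Enum

class WritePolicy(Enum):
    # Every write goes straight to disk: low performance, high reliability.
    WRITE_THROUGH = "write through"
    # Write-back variants land data in cache first and flush to disk when
    # host I/Os are idle: high performance, lower reliability.
    WRITE_BACK_AND_MIRRORING = "write back and mirroring"          # local + peer cache
    WRITE_BACK_WITHOUT_MIRRORING = "write back without mirroring"  # local cache only
    # Mandatory variants: the storage device must write to the local cache
    # (and also the peer cache for the mirroring variant).
    MANDATORY_WRITE_BACK_AND_MIRRORING = "mandatory write back and mirroring"
    MANDATORY_WRITE_BACK_WITHOUT_MIRRORING = "mandatory write back without mirroring"

class PrefetchPolicy(Enum):
    INTELLIGENT = "intelligent prefetch"
    CONSTANT = "constant prefetch"
    VARIABLE = "variable prefetch"
    NONE = "non-prefetch"
```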

Method used to format a LUN on the storage device
1. A LUN must be formatted after it is created on the storage device.
2. After a LUN is created, the storage device formats it automatically without manual intervention.
3. LUNs that are in use cannot be formatted manually.

Method used to plan performance of the RAID group
You can plan the RAID group performance as follows: Different RAID levels vary in the following aspects:
1. Read and write performance.
2. Disk utilization.
3. Application scenarios.
When selecting a RAID level for specific applications, consider performance, the amount of data to be stored, and disk utilization. The following conclusions can help in selecting RAID levels:
a. If data redundancy is not required, RAID 0 provides the best read and write performance.
b. If you require data redundancy and good system performance regardless of disk costs, RAID 1 or RAID 10 is an optimal choice.
c. If you require data redundancy, good system performance, and cost-effective disk usage, RAID 3, RAID 5, RAID 6, or RAID 50 is an optimal choice.
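
The disk-utilization trade-off mentioned above can be illustrated with the commonly cited capacity formulas for each RAID level. This is a minimal sketch under those standard formulas; the RAID 50 calculation assumes the group is built from a given number of RAID 5 subgroups (two by default here).

```python
def usable_fraction(raid_level, disks, raid50_subgroups=2):
    """Approximate fraction of raw capacity available as usable space."""
    if raid_level == "RAID 0":
        return 1.0                                  # striping only, no redundancy
    if raid_level in ("RAID 1", "RAID 10"):
        return 0.5                                  # mirrored copies
    if raid_level in ("RAID 3", "RAID 5"):
        return (disks - 1) / disks                  # one parity disk's worth
    if raid_level == "RAID 6":
        return (disks - 2) / disks                  # two parity disks' worth
    if raid_level == "RAID 50":
        return (disks - raid50_subgroups) / disks   # one parity per RAID 5 subgroup
    raise ValueError(f"unsupported RAID level: {raid_level}")

for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6", "RAID 50"):
    print(level, round(usable_fraction(level, disks=8), 2))
```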
