Hello all,
This case describes a proposed solution for replacing the existing storage with OceanStor Dorado 5000 V6 storage.
ISSUE DESCRIPTION
We plan to deploy SAN storage in 2022 and would like to coordinate with you on possible options. Our current classic block storage (2 x Hitachi arrays, 150 TB net SSD capacity each) will be replaced.
DEVICE MODEL
Based on the requirement to replace the two Hitachi storage systems, I would like to recommend the Huawei OceanStor Dorado 5000 V6 all-flash system with all-NVMe flash disks.
Within a 2U enclosure, the OceanStor Dorado 5000 V6 storage system offers two controllers, 32 Gb/s FC connectivity, and up to 200 TB of physical usable capacity. The system also supports data deduplication and compression; in the experience of other Volkswagen users, the data reduction ratio reaches 3:1 in most cases.
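The effect of the reduction ratio on usable space can be sketched as follows. This is a rough estimate, not a vendor sizing tool; the 200 TB physical capacity and the 3:1 ratio are the figures quoted above.

```python
# Rough effective-capacity estimate for a data-reducing all-flash array.
# physical_tb and reduction_ratio come from the figures quoted in the text;
# real-world reduction depends heavily on the data set.

def effective_capacity_tb(physical_tb: float, reduction_ratio: float) -> float:
    """Effective (logical) capacity after deduplication and compression."""
    return physical_tb * reduction_ratio

print(effective_capacity_tb(200, 3.0))  # 600.0 TB logical per system at 3:1
print(effective_capacity_tb(200, 1.0))  # 200.0 TB if no reduction is achieved
```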
You can learn more about this product from the following links:
ANALYSIS
If possible, I would like the following information, so that I can prepare a proposal based on your feedback.
1. Performance expectation: IOPS/Throughput;
On average, the requirement is 10,000 IOPS, rising to a maximum of 30,000 IOPS at peak.
2. Data protection: cross-site active-active/snapshot/clone/replication?
We plan to run the storage systems active-active in a MetroCluster environment with two nodes and a quorum server. Further SAN options are not currently planned.
3. How many front-end ports do we need?
According to current planning, we need 4 FC front-end ports to connect to our Brocade SAN switches.
4. Expected maximum capacity expansion capability in the future;
We are currently using 150 TB per storage system and we are not assuming large growth as the strategy is moving towards cloud infrastructure.
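To make the sizing discussion above concrete, the stated requirement (10,000 IOPS average, 30,000 IOPS peak) can be compared against a delivered figure as a headroom multiple. This is my own rule-of-thumb check, not vendor guidance; the 200,000 IOPS example value is taken from the sync-mirror results further below.

```python
# Simple headroom check: how many times over does the array cover the
# required IOPS? Values below are the requirements stated in the text and
# an example delivered figure; this is a sketch, not a sizing guarantee.

def headroom(delivered_iops: int, required_iops: int) -> float:
    """Ratio of delivered to required IOPS."""
    return delivered_iops / required_iops

print(round(headroom(200_000, 30_000), 1))  # 6.7x headroom at peak load
print(round(headroom(200_000, 10_000), 1))  # 20.0x headroom at average load
```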
Here is a rough calculation based on the proposed configuration:
Hardware configuration: 2 x OceanStor Dorado 5000 V6 (36 x 3.84 TB NVMe SSD, 32 Gb/s Fibre Channel)

| I/O profile (100% random) | Read/Write mix | Single deployment, no active-active (per system) | Sync-mirror with transparent failover, <= 10 km (per system) |
|---|---|---|---|
| 8 KB | 50% read / 50% write | 290,000 IOPS @ 1 ms; 2.3 GB/s | 200,000 IOPS @ 1 ms; 1.6 GB/s |
| 8 KB | 70% read / 30% write | 380,000 IOPS @ 1 ms; 2.9 GB/s | 280,000 IOPS @ 1 ms; 2.2 GB/s |
| 32 KB | 50% read / 50% write | 110,000 IOPS @ 1 ms; 3.3 GB/s | 73,000 IOPS @ 1 ms; 2.3 GB/s |
| 32 KB | 70% read / 30% write | 130,000 IOPS @ 1 ms; 4.0 GB/s | 96,000 IOPS @ 1 ms; 3.0 GB/s |
| 64 KB | 50% read / 50% write | 54,000 IOPS @ 1 ms; 3.5 GB/s | 45,000 IOPS @ 1 ms; 2.8 GB/s |
| 64 KB | 70% read / 30% write | 66,000 IOPS @ 1 ms; 4.2 GB/s | 56,000 IOPS @ 1 ms; 3.5 GB/s |
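The IOPS and throughput columns can be cross-checked against each other, since throughput should equal IOPS times block size. A small sketch, assuming the vendor quotes decimal units (1 KB = 1,000 bytes, 1 GB = 10^9 bytes), which reproduces the table figures to rounding:

```python
# Consistency check for the performance table: throughput = IOPS x block size.
# Decimal units (1 KB = 1,000 B, 1 GB = 1e9 B) are my assumption about how
# the vendor figures are expressed.

def throughput_gbs(iops: int, block_kb: int) -> float:
    """Throughput in GB/s implied by an IOPS figure at a given block size."""
    return iops * block_kb * 1_000 / 1e9

print(round(throughput_gbs(290_000, 8), 1))   # 2.3 GB/s (8 KB, 50/50 row)
print(round(throughput_gbs(130_000, 32), 1))  # 4.2 GB/s (32 KB, 70/30 row)
print(round(throughput_gbs(66_000, 64), 1))   # 4.2 GB/s (64 KB, 70/30 row)
```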
If the sync-mirror is configured, performance is affected by distance: for every 10 km increase in distance, write latency increases by 0.1-0.2 ms, while read latency is unaffected because reads are served locally.
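The distance penalty just described can be expressed as a simple linear model. The 1 ms base latency is taken from the table figures above; the linear scaling and the 0.15 ms midpoint default are my simplifications of the stated 0.1-0.2 ms per 10 km.

```python
# Rough write-latency model for a sync-mirror link: base latency plus a
# per-distance penalty. The linear model and the 0.15 ms/10 km default are
# simplifying assumptions; reads stay local and are unaffected.

def write_latency_ms(distance_km: float, base_ms: float = 1.0,
                     penalty_per_10km_ms: float = 0.15) -> float:
    """Estimated write latency in ms at a given sync-mirror distance."""
    return base_ms + (distance_km / 10) * penalty_per_10km_ms

print(write_latency_ms(0))   # 1.0 ms with no mirror distance
print(write_latency_ms(10))  # ~1.15 ms at the 10 km planning limit
```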
From my perspective, this solution is capable of fulfilling the performance requirements of your IT environment.
Thanks.

