Question
In a storage system, eight NL-SAS disks form a RAID 6 group.
LUNs are created on the RAID 6 group.
The default prefetch policy is intelligent prefetch, and the write policy is write back with mirroring.
On a Red Hat host, a dd command is executed to test the read performance of raw LUNs mapped from the storage system.

The command output indicates that the LUNs have low I/O read performance (only about 150 Mbit/s). Why?
Answer
Cause:
On the storage array side, the LUN performance statistics show that the I/O pressure on the LUNs is light, at only 1024 IOPS.
Therefore, the fault is not caused by a performance bottleneck on the storage array.

Solution:
Perform the following steps to optimize the performance:
1. Log in to the Red Hat host as user root and run the dd if=/dev/sds of=/dev/null bs=256K command to set a block size of 256 KB.

2. Run the dd if=/dev/sds of=/dev/null bs=256K iflag=direct command to set the I/O mode to direct I/O (the O_DIRECT flag is set when the block device is opened).
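The two steps above can be sketched against a scratch file standing in for the raw LUN (/dev/sds is specific to the original environment); note that the direct I/O read may fail on file systems without O_DIRECT support, such as tmpfs.

```shell
# Create a 16 MiB scratch file to stand in for the raw LUN (/dev/sds in this case).
dd if=/dev/zero of=lun_scratch.img bs=1M count=16 status=none

# Step 1: buffered sequential read with a 256 KiB block size.
dd if=lun_scratch.img of=/dev/null bs=256K

# Step 2: the same read with direct I/O (O_DIRECT set at open time);
# this may fail on file systems that lack O_DIRECT support (e.g. tmpfs).
dd if=lun_scratch.img of=/dev/null bs=256K iflag=direct || true

rm -f lun_scratch.img
```

dd prints the throughput it measured on completion, so comparing the two runs shows the effect of bypassing the page cache.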

Supplementary Information: Optimization Methods at the Linux Block Device Layer
● Changing the block device scheduling algorithm: Run the echo noop > /sys/block/sd*/queue/scheduler command,
where sd* can be sdc, sdd, or sds based on actual conditions.
Application scenario: In most cases, the scheduling algorithm of host block devices is set to noop so that I/O sorting and merging are performed on the storage system rather than on the host block devices.
● Adjusting the prefetch policy: Modify the read_ahead_kb value in /sys/block/sd*/queue/read_ahead_kb.
Application scenario: For random services, services with a low hit ratio, and direct I/O, set a small value for
read_ahead_kb. For services with sequential small I/Os, set a large value for read_ahead_kb.
● Adjusting the maximum I/O size of block devices: Modify the max_sectors_kb value in /sys/block/sd*/queue/max_sectors_kb;
the default is 1024 KB.
Application scenario: When large I/Os occur,
you can adjust the maximum I/O size to prevent large I/Os from being split at the block device layer.
● Modifying the block device queue depth: Modify the nr_requests value in /sys/block/sd*/queue/nr_requests.
Application scenario: When the pressure of block devices is heavy,
the block device queue depth can be modified to avoid a performance bottleneck.
● Setting the direct I/O mode (the O_DIRECT flag is set when the block device is opened):
Run the dd if=/dev/sd* of=/dev/null bs=512 iflag=direct command (use oflag=direct for write tests).
Application scenario:
This optimization method applies to applications that manage their own caches, such as databases,
because direct I/Os bypass the file system page cache and therefore do not consume free system memory.
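The sysfs tunings above can be collected into one sketch. The tune_blockdev helper, the SYSFS_ROOT override, and the example values (128 KB read-ahead, queue depth 512) are illustrative assumptions, not product recommendations; writing to the real /sys requires root.

```shell
# Sketch: apply the block-layer tunings above to one device's queue.
# tune_blockdev, SYSFS_ROOT, and the example values are assumptions for
# illustration; run as root against the real /sys and pick values that
# match your workload.
tune_blockdev() {
  q="${SYSFS_ROOT:-/sys}/block/$1/queue"
  [ -d "$q" ] || { echo "no queue directory for $1" >&2; return 1; }
  echo noop > "$q/scheduler"      # let the array, not the host, sort and merge I/Os
  echo 128  > "$q/read_ahead_kb"  # small prefetch for random or direct-I/O services
  echo 1024 > "$q/max_sectors_kb" # keep large I/Os from being split at the block layer
  echo 512  > "$q/nr_requests"    # deepen the queue when device pressure is heavy
}

# Usage (as root): tune_blockdev sds
```

Grouping the writes in one function makes it easy to apply the same settings to each mapped LUN (sdc, sdd, sds, and so on).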
Verified Versions
The fault occurs on the host side.
The solution provided in this section is applicable to all storage products.