
Why Is the Read Performance of a Red Hat Host Low During a Read Performance Test Using the dd Command?


Question

In a storage system, eight NL-SAS disks form a RAID 6 group, and LUNs are created on that group. The default prefetch policy is intelligent prefetch, and the write policy is write back with mirroring. On a Red Hat host, a dd command is executed to test the read performance of raw LUNs mapped from the storage system.


The command output indicates that the read performance of the LUNs is low (only about 150 Mbit/s). Why?



Answer

Cause:

Viewing the LUN performance data on the storage array shows that the I/O pressure on the LUNs is light (only 1024 IOPS). The fault is therefore not caused by a performance bottleneck on the storage array.



Solution:

Perform the following steps to optimize the performance:

1. Log in to the Red Hat host as user root and run the dd if=/dev/sds of=/dev/null bs=256K command to set a larger block size.



2. Run the dd if=/dev/sds of=/dev/null bs=256K iflag=direct command to set the I/O mode to direct I/O (set the O_DIRECT flag when opening the block device).
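The two steps above can be sketched end to end. In this minimal sketch a scratch file stands in for the raw LUN (/dev/sds in the commands above), since the actual device name depends on your LUN mapping; on a real host, point if= at the mapped LUN instead:

```shell
# Compare buffered and direct (O_DIRECT) sequential reads with dd.
# lun_standin.img is a scratch file standing in for the raw LUN.
dev=./lun_standin.img
dd if=/dev/zero of="$dev" bs=1M count=64 status=none   # create a 64 MiB scratch file
dd if="$dev" of=/dev/null bs=256K                      # step 1: buffered read
dd if="$dev" of=/dev/null bs=256K iflag=direct         # step 2: direct read
rm -f "$dev"
```

dd prints the achieved throughput when each run completes; comparing the two runs shows how much the host page cache and prefetch contribute to the measured result.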



Supplementary Information: Optimization Methods at the Linux Block Device Layer

● Changing block device scheduling algorithms: Run the echo noop > /sys/block/sd*/queue/scheduler command, 

    where sd* can be sdc, sdd, or sds based on actual conditions.

    Application scenario: In most existing cases, the scheduling algorithms of host block devices are set to noop so that the sorting and merging of I/Os are performed on the storage system rather than on the host block devices.


● Adjusting prefetch policies: Modify read_ahead_kb in /sys/block/sd*/queue/read_ahead_kb.

    Application scenario: For random services, services with a low hit ratio, and direct I/O, set a small value for 

    read_ahead_kb. For services with sequential small I/Os, set a large value for read_ahead_kb.


● Adjusting the maximum I/O size of block devices: Modify /sys/block/sd*/queue/max_sectors_kb to adjust the maximum 

    I/O size, which is 1024 KB by default.

    Application scenario: When large I/Os occur, you can adjust the maximum I/O size to prevent large I/Os from being split at the block device layer.


● Modifying the block device queue depth: Modify /sys/block/sd*/queue/nr_requests.

    Application scenario: When the pressure on block devices is heavy, the block device queue depth can be increased to avoid a performance bottleneck.


● Setting the direct I/O mode (set the O_DIRECT flag when opening a block device): 

    Run the dd oflag/iflag=direct if=/dev/sd* of=/dev/null bs=512 command.

    Application scenario: This optimization method applies to applications that have their own cache resources, such as databases, because direct I/Os do not pass through file system buffers and therefore do not occupy any free memory in the system.
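Taken together, the tunables above can be applied from a short root shell session. This is a sketch, not a recommendation: the device name sds and every value below are illustrative assumptions; choose values that match your workload and verify them against your kernel version.

```shell
# Apply the block-layer tunables discussed above to one device (run as root).
DEV=sds                                            # assumed device name; adjust
echo noop > /sys/block/$DEV/queue/scheduler        # let the array sort/merge I/Os
echo 128  > /sys/block/$DEV/queue/read_ahead_kb    # small prefetch, e.g. for random I/O
echo 1024 > /sys/block/$DEV/queue/max_sectors_kb   # maximum I/O size per request
echo 256  > /sys/block/$DEV/queue/nr_requests      # deeper queue under heavy load
```

These writes take effect immediately but do not persist across reboots; a udev rule or boot script is needed to make them permanent.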


Verified Versions

The fault occurs on the host side.

The solution provided in this section is applicable to all storage products.
