How do I set multiple different lun queue depths for different luns on the same HBA?
Environment
- Red Hat Enterprise Linux 6
- Red Hat Enterprise Linux 7
Issue
- We have two different storage vendors which each recommend a different lun queue depth setting. Is it possible to have a different queue depth set for different luns on the same HBA?
- How can I dynamically change the lun queue depth of a disk?
- Is it possible to dynamically set a value higher than what is currently set in the kernel at boot from *.conf file?
Resolution
- You can change individual device queue depths dynamically (the change takes effect immediately, no reboot required), provided the device driver for that disk supports it:
# echo <new-depth-value> > /sys/block/sdN/device/queue_depth
- For example, the following is attempted against a disk whose driver does not support queue_depth changes, so the sysfs entry is read-only:

# echo 2048 > /sys/block/sda/device/queue_depth
bash: /sys/block/sda/device/queue_depth: Permission denied
- For example, a disk serviced by qla2xxx does allow dynamic queue_depth changes:

# cat /sys/block/sdt/device/queue_depth
32
# echo 64 > /sys/block/sdt/device/queue_depth
# cat /sys/block/sdt/device/queue_depth
64
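Since different depths can be written per device, applying the two vendors' recommendations on one HBA can be scripted. The helper below is a hypothetical sketch (set_qdepth is not a standard command); it writes a depth to a queue_depth attribute and reports whether the driver accepted the change:

```shell
# Hypothetical helper: write a new depth to a queue_depth sysfs attribute.
# Drivers that do not support dynamic changes expose the attribute read-only,
# so the write fails and we report that instead of dying silently.
set_qdepth() {
    attr=$1    # e.g. /sys/block/sdt/device/queue_depth
    depth=$2   # desired per-lun queue depth
    if echo "$depth" > "$attr" 2>/dev/null; then
        echo "$attr is now $(cat "$attr")"
    else
        echo "driver does not allow changing $attr" >&2
        return 1
    fi
}

# Example (run as root): different depths for two luns on the same HBA.
# set_qdepth /sys/block/sdt/device/queue_depth 64
# set_qdepth /sys/block/sdu/device/queue_depth 32
```

The device names and depth values above are placeholders; substitute the devices and the values your storage vendors recommend.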
- The set value can be greater than the queue_depth set by default or via a *.conf file, as long as the value is within the maximum supported limits of the driver.
- For example, the following Smart Array adapter supports, by default, a queue depth of 1020.
- It can be set lower:

# echo 32 > /sys/devices/pci0000:00/0000:00:01.0/0000:04:00.0/host4/target4:0:0/4:0:0:0/queue_depth
# cat /sys/devices/pci0000:00/0000:00:01.0/0000:04:00.0/host4/target4:0:0/4:0:0:0/queue_depth
32

- But not higher than the maximum allowed by the driver, which in this case also happens to be the default value of 1020:

# echo 1024 > /sys/devices/pci0000:00/0000:00:01.0/0000:04:00.0/host4/target4:0:0/4:0:0:0/queue_depth
# cat /sys/devices/pci0000:00/0000:00:01.0/0000:04:00.0/host4/target4:0:0/4:0:0:0/queue_depth
1020

- Setting the queue_depth to values much higher than the default may cause resource issues within the HBA hardware and adversely affect performance under heavy I/O loads.
- Setting the queue_depth to high values may also have adverse performance effects within the storage under heavy I/O loads; for example, a high queue_depth across multiple luns could cause frequent QUEUE FULL conditions within the storage controller.
- As with any tuning, changes to the queue_depth should be made with care and thoroughly tested after each change.
- This change is not permanent; the queue depth reverts to the driver's default upon reboot. See What is the HBA queue depth, how to check the current queue depth value and how to change the value? for information on changing driver queue depth defaults for the lpfc and qla2xxx drivers.
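As a concrete illustration of a driver-wide default that persists across reboots, the qla2xxx driver exposes the ql2xmaxqdepth module parameter (lpfc has an analogous lpfc_lun_queue_depth parameter), which can be set in a modprobe configuration file. The file name and depth value below are illustrative only, and note that a module parameter applies to every lun serviced by that driver, unlike the per-device sysfs or udev approaches:

```
# /etc/modprobe.d/qla2xxx.conf  (illustrative file name)
options qla2xxx ql2xmaxqdepth=64
```

Depending on whether the driver is built into the initramfs, the initramfs may need rebuilding for the option to take effect at boot.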
- Alternatively, if it is desired to change the lun queue depth only on selected devices, udev rules can be created to apply the setting upon each reboot. See From where does default 'scsi timeout' value get set for scsi devices? for an example of a rule that changes /sys/block/sdN/timeout; that rule can easily be adapted to change /sys/block/sdN/device/queue_depth instead. Additionally, an example udev rule that sets the depth on a per-wwid basis is provided below:
ACTION!="add|change", GOTO="qdepth_end"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}!="partition", ENV{ID_SERIAL}=="<wwid1>", ATTR{device/queue_depth}="8"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}!="partition", ENV{ID_SERIAL}=="<wwid2>", ATTR{device/queue_depth}="8"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}!="partition", ENV{ID_SERIAL}=="<wwid3>", ATTR{device/queue_depth}="8"
LABEL="qdepth_end"
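The `<wwidN>` placeholders in the rule must be replaced with the ID_SERIAL values udev assigns to the disks. A sketch of looking that value up (wwid_of is a hypothetical helper, and /dev/sdt is an example device name):

```shell
# Hypothetical helper: print the ID_SERIAL (wwid) udev assigns to a disk,
# i.e. the value to match with ENV{ID_SERIAL}== in the udev rule.
wwid_of() {
    dev=$1
    if [ -b "$dev" ]; then
        udevadm info --query=property --name="$dev" | sed -n 's/^ID_SERIAL=//p'
    else
        echo "no such block device: $dev" >&2
        return 1
    fi
}

# Example (on a system where the disk exists):
# wwid_of /dev/sdt
```

After installing the rule file under /etc/udev/rules.d/, running `udevadm control --reload` followed by `udevadm trigger --action=change --subsystem-match=block` applies the rule without waiting for a reboot.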
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.