What is the HBA queue depth, how do I check the current queue depth value, and how do I change it?
Environment
- Red Hat Enterprise Linux (RHEL) 9
- Red Hat Enterprise Linux (RHEL) 8
- Red Hat Enterprise Linux (RHEL) 7
- Red Hat Enterprise Linux (RHEL) 6
- Red Hat Enterprise Linux (RHEL) 5
Issue
- How to check the current queue depth value of a QLogic Host Bus Adapter (HBA) and change the value?
- What is the Fibre Channel HBA queue depth, and how to check the current queue depth value?
Resolution
Changing the queue depth via QLogic HBA driver
Using this approach, an unload/load of the driver or a reboot of the system is required. This changes the value of the ql2xmaxqdepth option in the qla2xxx driver:
[root@host ~]# modinfo qla2xxx | grep ql2xmaxqdepth
parm: ql2xmaxqdepth:Maximum queue depth to report for target devices. (int)
- The following entry needs to be added to /etc/modprobe.conf (RHEL 5), or to a new file in the directory /etc/modprobe.d (RHEL 6 and later), for example /etc/modprobe.d/qla2xxx.conf, and the initrd/initramfs image needs to be rebuilt to make the change permanent:
options qla2xxx ql2xmaxqdepth=16
- If you are using RHEL 5, please note the /etc/modprobe.conf file may additionally need this entry:
alias scsi_hostadapter1 qla2xxx
- Please refer to How do I rebuild the initial ramdisk image in Red Hat Enterprise Linux? for more information. On the next boot, make sure the system boots with the newly built initrd/initramfs.
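The steps above can be sketched as follows. This is a minimal sketch, not a definitive procedure: the file name qla2xxx.conf and the depth value 16 are example choices, and writing the file plus rebuilding the initramfs require root.

```shell
#!/bin/sh
# Build the modprobe.d entry for the qla2xxx driver.
# DEPTH=16 is an example value; follow your SAN vendor's recommendation.
DEPTH=16
CONF_LINE="options qla2xxx ql2xmaxqdepth=${DEPTH}"
echo "$CONF_LINE"

# To apply (as root), then reboot:
#   echo "$CONF_LINE" > /etc/modprobe.d/qla2xxx.conf
#   dracut -f               # rebuild the initramfs for the running kernel
# Verify after reboot that the module picked up the value:
#   cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
```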
QLogic provides a second configurable parameter that can help decrease SAN saturation: execution throttle. More about this parameter can be found in What is QLogic execution throttle and how does it relate to queue depth?. Execution throttle must be changed within the card's configuration utility.
Changing the queue depth via Emulex HBA driver
Using this approach, an unload/load of the driver or a reboot of the system is required. The following options influence the queue depth:
[root@host ~]# modinfo lpfc | grep queue_depth
parm: lpfc_lun_queue_depth:Max number of FCP commands we can queue to a specific LUN (uint)
parm: lpfc_hba_queue_depth:Max number of FCP commands we can queue to a lpfc HBA (uint)
- These options can be used in /etc/modprobe.conf (RHEL 5), or in a new file in the directory /etc/modprobe.d (RHEL 6 and later), for example /etc/modprobe.d/lpfc.conf. The initrd/initramfs image then needs to be rebuilt to make the change permanent, followed by a reboot. Please refer to How do I rebuild the initial ramdisk image in Red Hat Enterprise Linux? for more information.
options lpfc lpfc_lun_queue_depth=16
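As with the QLogic driver, the Emulex change can be sketched like this. The file name lpfc.conf and the depth of 16 are example choices only; applying the change needs root and a reboot.

```shell
#!/bin/sh
# Build the modprobe.d entry for the lpfc driver.
# A per-LUN depth of 16 is an example value, not a recommendation.
LUN_DEPTH=16
CONF_LINE="options lpfc lpfc_lun_queue_depth=${LUN_DEPTH}"
echo "$CONF_LINE"

# To apply (as root), then reboot:
#   echo "$CONF_LINE" > /etc/modprobe.d/lpfc.conf
#   dracut -f               # rebuild the initramfs
# Verify after reboot:
#   cat /sys/module/lpfc/parameters/lpfc_lun_queue_depth
```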
Changing the queue depth of VMware PVSCSI HBA
- Please refer to the following article for detailed information about how to change the queue depth value for a VMware PVSCSI HBA:
How to change the queue depth on VMware virtual guest system running RHEL 6.3?
Changing the queue depth of the fnic HBA driver
- Please refer to How to find and change fnic queue depth?.
To change the queue depth on a per device basis
- See How do I set multiple different lun queue depths for different luns on the same HBA? on how to change the queue depth on a per-device basis. This method is vendor- and transport-agnostic; it should work with just about any SCSI host driver that supports dynamically changing the SCSI queue_depth.
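A minimal sketch of the per-device approach, assuming the usual sysfs layout in which each SCSI device exposes a writable queue_depth attribute. The device address 0:0:0:0 is a placeholder, root is required, and the change does not persist across reboots.

```shell
#!/bin/sh
# Set the queue depth of a single SCSI device at runtime via sysfs.
# The change is lost on reboot.
set_lun_queue_depth() {
    f="/sys/bus/scsi/devices/$1/queue_depth"
    if [ -w "$f" ]; then
        echo "$2" > "$f" && cat "$f"   # write, then read back the new value
    else
        echo "not writable or missing: $f" >&2
        return 1
    fi
}

# Placeholder address; list real devices with: ls /sys/bus/scsi/devices
set_lun_queue_depth 0:0:0:0 16 || true
```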
Root Cause
- The queue depth is the number of I/O requests that are "in flight" (requested but not yet acknowledged as completed) when communicating with a SAN storage. These requests can be configured per single Logical Unit Number (LUN) that is accessed, or per HBA. The maximum queue depth is the largest number of requests allowed to be in flight at once, and this setting can significantly influence storage performance.
- The maximum queue depth should be chosen carefully. Low values can lead to bad I/O performance, but so can high values, by keeping the SAN target from using its caches and scheduling in an optimal way. Not only the queue depth of a single HBA, but the queue depths of all HBAs connected to a storage port on the SAN target influence each other's performance. The vendor of the SAN target might have recommendations for the maximum queue depth to be used. Further relevant factors are:
- the number of HBA ports connected to the target
- the I/O pattern generated by applications running on the system
Diagnostic Steps
How can I practically verify the qdepth setting?
After setting queue_depth to 1, blktrace can be used to verify that the outstanding in-flight count at the driver never exceeds 2.
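As a rough alternative cross-check to blktrace, the /sys/block/&lt;dev&gt;/inflight attribute reports two columns, reads and writes currently in flight; sampling their sum while I/O runs should stay at or below the configured depth. The helper below only parses that line; sdX is a placeholder device name.

```shell
#!/bin/sh
# Sum the two columns (reads, writes) of a /sys/block/<dev>/inflight line.
inflight_total() {
    set -- $1            # rely on word splitting of the two columns
    echo $(( $1 + $2 ))
}

# Example usage while I/O is running (sdX is a placeholder, needs root):
#   while :; do inflight_total "$(cat /sys/block/sdX/inflight)"; sleep 0.1; done
inflight_total "       1        0"   # prints 1
```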