SQL Server and HBA Queue Depth Mashup

Using SAN storage for your SQL Server instance?  If so, you should take the host bus adapter (HBA) queue depth setting into consideration.  In a nutshell, an HBA connects your server to the SAN.  The HBA queue depth setting throttles the maximum number of I/O operations that can flow simultaneously from the HBA port to the SAN.  How you modify this setting depends on the brand and model of your HBA.  If the queue depth value is too low, your SQL Server instance’s I/O throughput can suffer.  If the value is too high, you can hamper performance on the SAN as a whole – particularly when multiple servers “open the floodgates” by concurrently increasing queue depths past the recommended level.
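To make the throttling idea concrete, here is a minimal Python sketch of the concept, not any vendor's actual firmware behavior: a semaphore caps the number of outstanding I/Os, and requests beyond the cap are refused (the class and method names are invented purely for illustration).

```python
import threading

class ThrottledHba:
    """Toy model: at most queue_depth I/Os may be outstanding at once."""

    def __init__(self, queue_depth):
        self.slots = threading.Semaphore(queue_depth)
        self.rejected = 0

    def try_submit_io(self):
        # A real HBA would queue the request instead; here we just count
        # submissions that could not start because all slots were busy.
        if self.slots.acquire(blocking=False):
            return True
        self.rejected += 1
        return False

    def complete_io(self):
        # Completing an I/O frees a slot for the next request.
        self.slots.release()

hba = ThrottledHba(queue_depth=2)
print(hba.try_submit_io())  # True: 1 outstanding
print(hba.try_submit_io())  # True: 2 outstanding, queue depth reached
print(hba.try_submit_io())  # False: throttled
hba.complete_io()
print(hba.try_submit_io())  # True: a slot was freed
```

The point of the model is the trade-off described above: a larger `queue_depth` lets one host push more concurrent I/O, but every host doing so competes for the same downstream SAN capacity.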

Finding a one-size-fits-all recommendation from the SAN vendor may be difficult, and the right value really depends on the applications you are running on the server.  SQL Server DBAs should work with their company’s SAN administrator to find a healthy balance between the SQL Server instance’s overall I/O throughput and the SAN’s overall capacity.

After researching this topic, I found that opinions vary on the best setting.  As with any complex system, the correct queue depth value really depends on several factors, such as the number of concurrent hosts connecting to the SAN, the LUNs involved, the HBA brand and its throughput capabilities, the number of HBAs on the host, and more.  Depending on the HBA model, the default queue depth is often 32, and opinions vary as to whether that value is too low.

So in this blog post I’ll call out a few key findings across different sources regarding the recommended HBA queue depth in conjunction with SQL Server:

Source: Tuning your SAN: Too much HBA Queue Depth?

Christian Bolton used SQLIOSim to test varying queue depths using dual-path Emulex HBAs.  He found that a queue depth of 64 provided the highest number of concurrent I/Os (versus 32 and 128).  At 128, he saw a 50% decay in concurrent I/Os handled.


Source: Predeployment I/O Best Practices

Mike Ruthruff doesn’t describe queue depth testing in this White Paper; however, he does acknowledge that increasing the queue depth can be necessary to support the required number of outstanding I/Os for a SQL Server instance.  He also notes that the default queue depth (typically ranging from 8 to 32) is usually not high enough for a SQL Server instance, and in his testing he found “substantial gains” with a queue depth of 64 or higher.


Source: SQL Server and SANs: The QueueDepth Setting of a Host Bus Adapter (HBA)

Linchi Shea tested the impact of HBA Queue Depth on 8K Random Reads (using SQLIO.exe) and found that the default value of 32 using an Emulex card performed best.


Source: Storage Top 10 Best Practices

In this Top 10 list, the SQL CAT team doesn’t detail specific queue depth values, but the article does confirm that “queue depth defaults are usually not deep enough to support SQL Server IO volumes.”


Source: SAP with Microsoft SQL Server 2005: Best Practices for High Availability, Maximum Performance, and Scalability

In this White Paper, Juergen Thomas states that a low default queue depth (32 in this case) can compound I/O issues from an SAP/SQL Server perspective.  He recommends a setting of 128 or even greater, with the caveat that you should check with your SAN vendor first.


Source: Tuning SQL Server 2005 on HP Integrity Servers

This HP White Paper suggests checking that the maximum HBA queue depth is greater than or equal to the number of spindles the host connects to.  For example, a host connecting to 16 spindles should have a queue depth of at least 16.
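That rule of thumb is simple enough to state as a one-line check; this hypothetical helper (the names are mine, not HP's) just makes the comparison explicit:

```python
def hba_queue_depth_ok(queue_depth, spindle_count):
    """HP's rule of thumb: the maximum HBA queue depth should be at
    least the number of spindles the host connects to."""
    return queue_depth >= spindle_count

print(hba_queue_depth_ok(32, 16))  # True: 32 covers 16 spindles
print(hba_queue_depth_ok(16, 24))  # False: depth falls short of 24 spindles
```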


Source: Hitachi Adaptable Modular Storage 1000: Best Practices for Microsoft® SQL Server OLTP Applications

Eduardo Freitas and Reginal A. Hay acknowledge that SQL Server applications will generally benefit from higher HBA queue depth settings.  Specific to the Hitachi Adaptable Modular Storage 1000, they use the following formula to calculate the maximum queue depth setting per host/server: 512 ÷ (# of hosts × # of LUNs).
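The Hitachi formula is easy to sanity-check in a couple of lines.  In this sketch the 512 figure is the queue limit from the paper; the function name, the integer division, and the reading of the LUN count as per-host are my assumptions, so confirm the exact interpretation against the White Paper itself:

```python
def max_queue_depth_per_host(hosts, luns, port_queue_limit=512):
    """Hitachi AMS 1000 guideline: split the total queue limit (512)
    evenly across every host/LUN combination."""
    return port_queue_limit // (hosts * luns)

# 4 hosts each seeing 4 LUNs: 512 / 16 = 32 per host
print(max_queue_depth_per_host(hosts=4, luns=4))  # 32
# 8 hosts each seeing 8 LUNs: 512 / 64 = 8 per host
print(max_queue_depth_per_host(hosts=8, luns=8))  # 8
```

Note how quickly the per-host budget shrinks as hosts and LUNs are added, which is exactly the SAN-saturation concern raised at the start of this post.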


That’s all for now…  The net takeaway of this blog post is that you should be cognizant of the HBA queue depth setting for servers hosting SQL Server, and that you must balance concurrent I/O throughput through the HBA port against saturation of the destination SAN.

Comments

  2. babuganesh2000 says:

********** Final Summary for file E:\sqliosim.mdx **********

    File Attributes: Compression = No, Encryption = No, Sparse = No

    Target IO Duration (ms) = 100, Running Average IO Duration (ms) = 15, Number of times IO throttled = 16111, IO request blocks = 30

********** Final Summary for file F:\sqliosim.mdx **********

    File Attributes: Compression = No, Encryption = No, Sparse = No

    Target IO Duration (ms) = 100, Running Average IO Duration (ms) = 17, Number of times IO throttled = 3902, IO request blocks = 43

What should I understand from the above output: is it good or bad?  Please help me, I am starting from the basics.



  3. Hi Ganesh,

You are using SQLIOSim, not SQLIO.  I would recommend you start off with SQLIO if you are looking to test the performance of your disk subsystem.  SQLIOSim doesn’t test performance; rather, it is used to test I/O stability.

    Check out the following link for more of a description:


    Best Regards,