TCP Offloading again?!

I have spent probably hundreds of hours on cases involving TCP Offloading and I know most of the signs (intermittent dropped connections, missing traffic in network traces).  However, I have to admit I got burned by it the other day and spent several more hours working an issue than I should have.

I was working on a server-down case for a financial trading company (in other words, large dollars involved every minute they were down) where the customer was experiencing slow connections to SQL Server.  The customer reported only some Linux ODBC clients were impacted.  Based on that description, we started looking at the client side.  However, we soon discovered that, while there was no detectable correlation between the clients, the problem was only visible going to a specific SQL Server instance.  The affected clients had no problem communicating with other instances of SQL Server.  Based on this, we started focusing on the SQL Server machine itself.

From the client application’s perspective, every query was taking roughly five seconds longer than expected.  Therefore, we collected a PSSDiag and looked at the performance of the SQL Server machine as a whole.  The Profiler traces showed that there was no delay inside SQL Server:

[screenshot: Profiler trace showing no delay inside SQL Server]


So, where were the five seconds coming from?

The next step was to look at a network trace:

[screenshot: network trace with two five-second timestamp deltas circled]


Check out the two sets of timestamps circled.  Both of them had a five-second delta!  Now we had physical proof of the problem, but we still didn't have a reason…

Then, I noticed something that turned out to be the key – the five-second delay was always between the data sent from the client and the server's response to that data.  That clinched it: this was 100% a server-side issue.  I couldn't yet explain why only some clients were impacted, but the problem was definitely on the server.  The other interesting thing to notice above is that the delay is even visible on the login!  This was completely surprising because this customer was using SQL Authentication, which is a highly optimized code path that should never have performance issues.  This, combined with the fact that the subsequent query wasn't showing up inside SQL Server as being delayed, caused me to start thinking about things outside of SQL Server.

The next thing to check for was filter drivers that might have inserted themselves into the TCP stack – antivirus, firewall, NIC teaming, etc.  Unfortunately, nothing like this was installed, so there were no clues there.  We also reconfirmed that TCP Chimney was turned off at the OS level.  And then it hit me…NIC-level TCP Offloading!!!

We pulled up the Ethernet adapter settings (Network Connections –> LAN XXX –> Properties –> Configure –> Advanced) and saw something that looked like this:


Lo and behold – TCP Offloading was enabled!

We disabled all of the Offloading settings, clicked OK, and performance was back to normal.  Connections were fast and query results were returning right after SQL Server generated them.  I should point out that we didn't take a stepwise approach here because this customer was losing large amounts of money every minute this system was down.  In a less critical situation, it would be worth changing each setting one at a time and testing in between.  In addition, I would recommend that you go back after the fact and test re-enabling each setting to see whether there is a negative impact.  There are some non-trivial performance benefits to be gained from these settings if everything is working properly.
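
For what it's worth, the driver-level settings shown in that dialog are stored as standardized NDIS keywords (e.g. `*RSS`, `*LsoV2IPv4`, `*TCPChecksumOffloadIPv4`) under each adapter's instance subkey of the network class key in the registry, which makes it possible to audit them without clicking through every adapter's UI.  The sketch below is a minimal illustration of that idea, not a supported tool; the keyword list is a common subset and individual drivers may expose different names.

```python
OFFLOAD_KEYWORDS = [
    "*LsoV2IPv4",               # Large Send Offload v2 (IPv4)
    "*LsoV2IPv6",               # Large Send Offload v2 (IPv6)
    "*RSS",                     # Receive-Side Scaling
    "*TCPChecksumOffloadIPv4",  # TCP Checksum Offload (IPv4)
    "*TCPChecksumOffloadIPv6",  # TCP Checksum Offload (IPv6)
]

def enabled_offloads(values):
    """Given {keyword: registry string value}, return the keywords that are
    not explicitly disabled ("0").  A keyword the driver does not expose at
    all is treated as disabled."""
    return [kw for kw in OFFLOAD_KEYWORDS if values.get(kw, "0") != "0"]

def audit_adapters():
    """Read each NIC driver instance under the network class key and report
    which offload keywords are enabled (Windows only)."""
    import winreg  # imported here so the helper above stays portable
    net_class = (r"SYSTEM\CurrentControlSet\Control\Class"
                 r"\{4D36E972-E325-11CE-BFC1-08002BE10318}")
    report = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, net_class) as cls:
        for i in range(winreg.QueryInfoKey(cls)[0]):
            name = winreg.EnumKey(cls, i)
            if not name.isdigit():
                continue  # skip non-instance subkeys such as "Properties"
            with winreg.OpenKey(cls, name) as key:
                values = {}
                for kw in OFFLOAD_KEYWORDS:
                    try:
                        values[kw] = str(winreg.QueryValueEx(key, kw)[0])
                    except OSError:
                        pass  # this driver does not expose this keyword
                try:
                    desc = winreg.QueryValueEx(key, "DriverDesc")[0]
                except OSError:
                    desc = name
                report[desc] = enabled_offloads(values)
    return report
```

Keyword values are strings ("0" = disabled); anything non-zero, including directional values such as "3" (Rx & Tx) on the checksum offloads, is reported as enabled here.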

We never did figure out why only some clients were impacted since all of the clients were using the same driver.  Nor were we able to figure out why only this SQL Server instance was impacted when several other SQL Server machines were configured the same way at the driver level.

The moral of the story?  I need to update my standard steps for capturing network traces to include NIC level TCP Offload settings!

As of this morning, my first four steps for capturing a network trace now look like this:

1a. Turn off TCP Chimney if any of the machines are Windows 2003
    Option 1) Bring up a command prompt and execute the following:
    netsh int ip set chimney DISABLED
    Option 2) Apply the Scalable Networking Patch (KB936594)

1b. Confirm that TCP Chimney is turned off if any of the machines are Windows 2008
    a) Bring up a command prompt and execute the following:
    netsh int tcp show global
    b) If it turns out TCP Chimney is on, disable it:
    netsh int tcp set global chimney=disabled

2.  Turn off TCP Offloading/Receive-Side Scaling/TCP Large Send Offload at the NIC driver level

3.  Retry your application.  Don't laugh - many, many problems are resolved by the above changes.
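
On a handful of machines the check in step 1b is easy to do by hand; if you need to sweep many servers, the netsh output can also be parsed programmatically.  This is a minimal, hypothetical Python sketch: it assumes English-locale output where the line of interest reads "Chimney Offload State : …", so treat the label text as an assumption, not a contract.

```python
import re
import subprocess

def parse_chimney_state(netsh_output):
    """Pull the Chimney Offload State value out of `netsh int tcp show global`
    output; returns e.g. "enabled", "disabled", "automatic", or None if the
    line is not present."""
    match = re.search(r"Chimney Offload State\s*:\s*(\w+)", netsh_output)
    return match.group(1).lower() if match else None

def chimney_is_off():
    """Run netsh on the local machine (Windows only) and report whether
    TCP Chimney is disabled."""
    out = subprocess.run(["netsh", "int", "tcp", "show", "global"],
                         capture_output=True, text=True, check=True).stdout
    return parse_chimney_state(out) == "disabled"
```

Remember Evan's caveat from the comments below: netsh only reflects the OS state, so a sweep like this still needs the NIC driver properties checked separately.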

Evan Basalik | Senior Support Escalation Engineer | Microsoft SQL Server Escalation Services

Comments (32)
  1. Tony says:

    Maybe we should start a Facebook fan page of "TCP Offloading Sucks, So Disable it by Default."  How can we get some pressure on Microsoft and the NIC vendors to stop screwing their customers, other software vendors, and even itself with millions in support costs?  I work for a software co. that this causes problems for and it’s just a waste of human resources dealing with this crap.

  2. Evan Basalik says:

    From a Microsoft perspective, we have certainly learned from our mistakes in this area.  Windows 2008 ships with the feature off by default and Windows 2008 sets the default behavior based on the NIC speed.

    I cannot comment on the driver vendors, but I would hope they are listening to feedback both from customers and Microsoft.

    My general recommendation is to leave TCP Offloading off unless you find yourself stressing your server to the extent that the potential increase in networking performance is worth enabling it.

  3. Evan Basalik says:

    Oops – the comment above was supposed to say "…Windows 2008 ships with the feature off by default and Windows 2008 *R2* sets the default behavior based on the NIC speed…"

  4. Grumpy Old DBA says:

    Could you confirm that "Turn off TCP Offloading/Receive-Side Scaling/TCP Large Send Offload at the NIC driver level" must be done at the card, and/or does "netsh int tcp show global" show the status of this?

    The network guys are saying this is disabled, but if I open the NIC settings it shows the IPv4 Checksum and IPv4 Large Send Offloads as both being enabled. Sorry, configuring TCP/IP isn't one of my more usual skill sets as a DBA!

  5. IL says:

    Is there any good reason to disable IPv4 Checksum Offload? This parameter is not mentioned in the article.

  6. Dean says:

    Why is it soooo hard to get this feature working correctly after all these years ? And if it can’t be made to work then why doesn’t every vendor just drop the idea ?

  7. rseiler says:

    On our 2003 R2 SP2, the NIC's Advanced tab doesn't even have TCP Checksum Offload, so I think we're fine, though I second the question about the one we do have, IPv4 Checksum Offload.

    "netsh int ip show offload" comes back only with this, which I think reflects that there is no TCP Checksum Offload.

    Offload Options for interface "Server Local Area Connection" with index: 10003:

  8. Evan Basalik says:

    You have to be careful – the netsh command only shows the OS state.  You need to check the card properties to see the state of the driver.

    I have not seen any issues with TCP Checksum Offload.

  9. HighPockets says:

    I have over 4000 servers to check for these properties being on.  Does anybody know how to query WMI to check for these TCP Offload settings?

    An automated way to change the settings?

  10. skeptic says:

    … We never did figure out why only some clients were impacted since all of the clients were using the same driver.  Nor were we able to figure out why only this SQL Server instance was impacted when several other SQL Server machines were configured the same way at the driver level …

    In my opinion you didn't solve the problem. I used to hear a lot from PSS not to use /3GB with SQL Server 2000, because they saw a lot of stability issues with the customers who used it. I NEVER saw a problem with /3GB, so I did recommend it to my customers.

    Looks like the line of advice that's coming from SQL PSS is to switch off all advanced tuning parameters and pray that this will solve the problem. Better off to turn off the server that runs SQL – that way you are not going to see any issues.

    On a serious note: test, test, test all possible scenarios within your environment; don't just follow silly advice not to use some features because someone can have problems with them.

  11. psssql says:


    We would hate to see you alter settings on over 4000 servers.  In general, I would say that if you are not seeing an issue with your servers, then you shouldn't need to alter anything.  I think the point of this blog was that if you do notice a performance issue, it may be a result of the above.

    It should be looked at on a case-by-case basis, not as a blanket change to your environment.


    Adam W. Saxton

  12. Jeff Jordan says:

    Can you give more details on what to disable on the NIC?  As another poster had asked, should IPv4 Checksum Offload also be disabled?  In my case I've disabled TCP Chimney in the OS and also disabled Large Send Offload v2 (IPv4), Large Send Offload v2 (IPv6), Receive Side Scaling, TCP Checksum Offload (IPv4), and TCP Checksum Offload (IPv6).  Also, what about UDP Checksum Offload?  Should that be disabled as well?



  13. says:

    Take another look at the failing clients for the applications they are running.  While my experience is on a much smaller business case, the application which was actually failing on a TCP connection was Eudora mail client, which is heavily multi-threaded – it's the combination of TCP Offloading and multi-threading which causes the NIC driver/hardware to fall on its face.  I don't pretend to understand the underlying mechanisms in detail but it appeared that switching threads requires the driver to switch its active TCP Offload connection in the NIC hardware and that switching process is prone to failure or excessive delay.

    The NIC in question here is a nVidia nForce4 chipset on-board NIC and I did do a stepwise check of the offloading settings: in one case it was Checksum Offload which had to be disabled and in another it was Segmentation Offload.  I turned both off on all the nForce4 equipped systems.

    Note that specific multi-threaded switching situations are nigh impossible to reproduce so it's difficult to be sure that any given system is tested and validated.

  14. says:

    I am curious.  Besides the previously mentioned Offload options, we have a TCP Connection Offload option for the integrated Broadcom NICs on our HP servers, and I was wondering whether this option should also be disabled (and tested) along with the previously recommended Offload options.

  15. Brandon says:

    Is there a way to script disabling TCP offload for Intel and Broadcom drivers? I want to add this to all my standard Windows build scripts. I am sick and tired of running into this issue myself. Microsoft really needs to revisit this; it is terrible.

  16. chris says:

    You said: "the five second delay was always between the data sent from the client and the server’s response to that data"

    But that's not what the timestamps circled in red appear to show.  Both samples of 5 second delay between packets show a delay between one packet sent by the server and the next packet sent by the server.  The delay between client query and server response appears to be around 160ms (hard to tell because the red circles obscure the timestamps).

  17. RB says:

    Looks like this solution is designed for a network with late-model network switches where both servers are Windows with TCP offload enabled on the NIC and in the OS (…/5709-WP101.pdf).  Unfortunately, a Windows server can also communicate with non-Windows servers, or with servers where the feature is not enabled.

  18. Neal says:

    Thank you for your advice here. Great article. We have 4 PowerEdge 2950's running with 100 Mb+ each. We run a voice-over-IP company, so we rely heavily on the performance of our NIC cards. The machines are all tied into MySQL servers. We experience NIC crashes at random intervals. We use the Broadcom 5708C NICs. I can't say that I like any of the Broadcom line of products; I have had a lot of trouble with them. People keep telling me to go with the Intel PRO/1000 dual GigE cards. This is my last-ditch effort at getting our NIC crashes resolved. If your advice fails, I will take a gun to these Broadcom cards.

    I went ahead and made the adjustments on the Windows side (command prompt) and also at the NIC level. Broadcom has done something right, and that's their management utility for their NICs. Easy to use.

    I will update everyone on my progress. Btw I am running windows server 2008 R2 on all my servers.

  19. Neal says:

    I am happy to report so far so good.

  20. NetworkPro says:

    You changed something and clicked OK – that resets the driver and some other things – and that caused the fix, in my opinion.

  21. Ramu_MSFT says:

    For the latest guidance on SNP, please check the following article:…/give-microsofts-scalable-networking-pack-140350


    (Microsoft SQL Server Support)

  22. Low Latency says:

    We are one of the fastest growing technologies to develop all latest appliances solutions Deep packet inspection, Ethernet 10gb and Low Latency.


  23. David says:

    Is disabling any of these disruptive? In other words does it reset the NIC and interfere with Network traffic from/to the server?

  24. Gts Bunty says:

    Intilop has several groups who can take on projects that range from a small 100k gate FPGA design/integration to a 10M gate SOC design/integration/verification project, or from a small 2 inch x 2 inch board for an embedded design application to a 22 inch x 26 inch, 24-layer, multi-gigabit blade-server board with multiple 1000+ pin BGA devices.

  25. intilop says:

    Intilop's innovative ideas make them the most respectable IP developer firm. This technology is especially designed for banks, financial institutions, data centers, cloud infrastructure, network equipment, defense/ aerospace platforms.


  26. Marvin says:

    Thanks,  I needed that.   I was looking at some of my Windows 2000 servers that don't have the TCP_CHIMNEY_OFFLOAD turned on, but as it turns out, they had it turned on at the NIC.   Great discovery

  27. Heartland Liberal says:

    I had run into this problem briefly, and applied changes to my Win 8.1 PC. But today I ran into such dramatic problems installing from files on my networked data server that I asked Google, found this article, and disabled TCP and UDP and checksum offloading on the NIC on my Win 2008 R2 data and archive server. Trying to copy a large folder for a Symantec install to a local drive was taking an hour. Seriously. After the change, I deleted from local drive, tried again, and the whole folder was copied in 15 seconds. Flat. I had been motivated to figure this out trying to run the Symantec install from the image on the server drive, and it had simply timed out, croaked, and locked up. I had to clean up the failed install. This time, it worked in a matter of minutes. I am making these changes on NIC on all my servers and machines today. Seriously, this has solved the growing bad performance problems I have been observing across the board.

  28. Jacques says:

    I too have observed these settings causing issues in numerous instances, too many to count.  I do, however, have a question regarding Large Send Offload.  Has it been identified as having a similar effect to TCP Offload?

  29. Thomas says:

    We get:

    "Data provider or other service returned an E_FAIL status."

    in our software, and when we query it from Excel:

    "[Microsoft][ODBC SQL Server Driver][DBNETLIB] General Network error. Check your network documentation"

    (customer has MSSQL 2008 and Terminal Server 2008 R2)


  30. Naida Sahagun says:

    Nice ideas – I Appreciate the facts . Does anyone know where my assistant can obtain a fillable a form example to edit ?


Comments are closed.
