
SQL Server Trace Flag 2371 for Dynamics AX


In my travels around the country from one Dynamics AX customer to another, the customer is usually aware of how important database statistics are to the overall performance of Dynamics AX.  What I rarely find, though, is Trace Flag 2371 enabled (or even an awareness that it exists), or an in-depth understanding of how Microsoft SQL Server handles automatically updating those very important
database statistics.  Could it be dark magic that keeps these statistics updated?  In this blog article I will try to demystify this process and attempt to convince you to embrace Trace Flag 2371 and enable it in your Dynamics AX environment.

Auto-Update Statistics (SQL Server Database Setting)

In the database settings for each database hosted on SQL Server there is a setting named “Auto Update Statistics”, which can be set to either “No” or “Yes”.  From a Dynamics/SQL Server best-practices perspective this should always be set to “Yes” unless you are directed otherwise by a Microsoft support engineer.  When it is set to “Yes”, SQL Server is allowed to automatically update the existing statistics in the database, and the more accurate the statistics, the faster the response time of a query (in most cases) will be.  The question then becomes: how does SQL Server decide when to “auto-update” a statistic, and how does it work?  Currently SQL Server will “mark” a statistic to be auto-updated when 20% of the data in that particular statistic has changed.  Once it is marked, it is actually updated the next time someone sends a query to SQL Server that uses that statistic.  No matter how many rows are in the statistic, SQL Server uses a static 20% threshold to determine when to auto-update it.

This can cause issues with large tables that have 25, 50, 100 million rows, or more.  If you have an INVENTTRANS table with 100 million rows, each statistic on that table covers 100 million rows, and Auto-Update Statistics is not triggered until 20 million rows change; it could take days, weeks, or even months in some cases for a statistic to trigger an auto-update.  This can cause significant performance issues, as even a 1% change in data can be statistically significant enough for SQL Server to alter an execution plan.  As a statistic gets older and more “stale” you run the risk of SQL Server using an index scan instead of an index seek, which will degrade overall performance.  This is why we always recommend that, even with “Auto-Update Statistics” enabled, customers have a nightly SQL maintenance job that updates all statistics with “UPDATE STATISTICS” or “SP_UPDATESTATS” to guarantee that the statistics on large tables get updated at least daily.
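For reference, here is a minimal T-SQL sketch of the maintenance pieces just described; the database name AXDB is only a placeholder for your own Dynamics AX database:

    -- Ensure the database-level setting is on (the best-practice default).
    ALTER DATABASE [AXDB] SET AUTO_UPDATE_STATISTICS ON;

    -- What a nightly maintenance job typically runs: refresh every statistic
    -- that has seen modifications since its last update (default sampling).
    USE [AXDB];
    EXEC sp_updatestats;

    -- Or target one large table explicitly with a full scan for accuracy.
    UPDATE STATISTICS [dbo].[INVENTTRANS] WITH FULLSCAN;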

In large, high-volume Dynamics AX deployments, sometimes even daily updating of statistics on these large tables is not enough to keep up with the new and modified data, and the statistics can become “stale” quickly.  There have been occasions in the past where very high volume customers were forced to run “SP_UPDATESTATS” multiple times a day just to maintain consistent optimal performance, because stale or out-of-date statistics were causing significant performance issues.  This is not ideal: we usually do not want this process running in the background during peak production hours, and because it updates statistics on anything where data has changed it does a lot of unnecessary work, rebuilding all statistics on any table whose data has changed since the last run.  That risks causing its own performance issues, as all affected plans then need to be recompiled, putting an unnecessary load on the CPU.  What we really need is a more targeted approach: a process that takes into account how many rows are in a table and uses that information to judge when an auto-update of a statistic should be triggered, rather than a static 20%; a process that does not rebuild statistics on smaller tables unnecessarily just because data has changed, creating CPU overhead with unnecessary recompiles; and a process that would allow even the heaviest volume customers to run “SP_UPDATESTATS” only once a day, and then only as a precaution.
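To see how far behind the statistics on a given table actually are, a query along these lines can help.  It assumes SQL Server 2008 R2 SP2 / 2012 SP1 or later (the versions that expose sys.dm_db_stats_properties) and uses INVENTTRANS only as an example table:

    -- How stale is each statistic on a large table?  Compare modifications
    -- since the last update against the static 20% trigger point.
    SELECT s.name                  AS statistic,
           sp.rows,
           sp.modification_counter,
           sp.rows * 0.20          AS static_20pct_trigger,
           sp.last_updated
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.INVENTTRANS')
    ORDER BY sp.modification_counter DESC;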

 

SQL Server Trace Flag 2371  

Enter SQL Server Trace Flag 2371.  This trace flag was released with SQL Server 2008 R2 SP1 and is available in all later versions.  There was a small update to it in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 that makes it more precise.  What Trace Flag 2371 does is change the static 20% threshold used by the auto-update statistics process into a dynamic threshold based on the number of rows: the larger the table, the lower the threshold and the more often the statistics are auto-updated.  This trace flag was added to SQL Server essentially expressly for ERP applications such as Dynamics AX that use cursors and rely heavily on singleton look-ups.  Given all this, we believe there is no reason why Dynamics AX customers should not implement this trace flag, given its obvious benefit to overall system performance and the fact that it addresses one of the biggest performance concerns for AX on SQL Server.
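Enabling it is straightforward.  For a permanent setting, the usual approach is to add -T2371 as a SQL Server startup parameter via SQL Server Configuration Manager; for testing, it can be turned on globally at runtime:

    -- Enable Trace Flag 2371 globally for the running instance
    -- (does not survive a restart; use the -T2371 startup parameter for that).
    DBCC TRACEON (2371, -1);

    -- Verify which trace flags are currently active.
    DBCC TRACESTATUS (-1);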

Example

Image “stolen” from the blog article:

http://blogs.msdn.com/b/saponsqlserver/archive/2011/09/07/changes-to-automatic-update-statistics-in-sql-server-traceflag-2371.aspx
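To make the dynamic threshold concrete: as described in the linked post, once a table grows past roughly 25,000 rows the trigger point falls off as approximately SQRT(1000 * rows) instead of the flat 20%.  That formula is an approximation taken from the post above, not a documented contract, but a quick calculation shows the difference:

    -- Static 20% threshold vs. the approximate dynamic threshold under TF 2371.
    SELECT rows_in_table,
           CAST(rows_in_table * 0.20 AS bigint)           AS static_20pct,
           CAST(SQRT(1000.0 * rows_in_table) AS bigint)   AS dynamic_tf2371
    FROM (VALUES (100000), (1000000), (10000000), (100000000)) AS t(rows_in_table);

    -- At 100 million rows the static rule waits for ~20,000,000 changes,
    -- while the dynamic rule fires after roughly 316,000.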

Comments (5)
  1. Jason Naccarato says:

    Yes, it does answer my question.  Thanks

  2. Excellent question, as there are a handful of Dynamics AX customers who do receive some performance relief from Trace Flag 4136 (for many it causes as many issues as it solves).  With Trace Flag 4136 enabled, SQL Server ignores the supplied parameter values for the SQL statement and uses the statistical average for that column from the statistics header of the index when it compiles the execution plan.  At that point the detail portion of the statistics, the histogram, is no longer used.  Trace Flag 2371 was really meant to fine-tune, through more frequent updates, the histogram portion of the statistics.  With that being said, Trace Flag 2371 will provide little to no benefit for a customer already utilizing Trace Flag 4136.  I hope that helps answer your question.

  3. Jason Naccarato says:

    We have SQL Trace Flag 4136 enabled in our environment, as it was previously recommended due to past performance issues.  I've been told database statistics are not important in our environment with Trace Flag 4136 enabled, because the default "parameter sniffing" behavior is changed so that a more average estimate of selectivity (based on density statistics) is used by the optimizer instead of the default parameter-specific estimates (based on the histogram). Is this the case, or should we look at enabling Trace Flag 2371?

  4. The overhead of re-computing a small number of statistics on the larger tables more frequently is far less than the cost of a frequently executed query suddenly going from a seek to a scan, causing a significant increase in duration and opening up the potential for blocking.  Without writing out pages of detail: SQL Server only holds 200 values (steps) for each statistic in the histogram, no matter how many rows the table actually has, and even a small percentage of rows (but still a high number of actual rows) can change this histogram, which can change how the optimizer handles the query and could yield an inefficient plan (a sketch after this comment thread shows how to view a histogram).  If the actual overhead from the increased statistics re-computes is really causing an issue, you would need to develop a custom plan for handling statistics, but that is somewhat of a fringe case and would require a custom approach.  It would be hard to provide an example, as I would need to use a customer's database and we really cannot do that, and creating a demo database with the right size and record distribution would be a lengthy endeavor.

  5. Denis Trunin says:

    Hi Michael

    Don't you think that setting this flag can lead to performance degradation? Updating statistics is a very resource-consuming operation. And a statistic itself just shows the data distribution, which does not change very often (for example, if you have 100 million InventTrans records and you update or insert 5 million records, your data distribution remains the same if your company operates as usual).

    I think it would be very useful if you provided some examples in which statistics that are outdated by less than 20% affect query plans.
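For the curious, the 200-step histogram mentioned in reply 4 above can be inspected directly; the table and statistic names below are only examples:

    -- Show the histogram for one statistic; SQL Server caps it at 200 steps
    -- no matter how many rows the table holds.
    DBCC SHOW_STATISTICS ('dbo.INVENTTRANS', 'I_177TRANSIDIDX') WITH HISTOGRAM;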

