How to troubleshoot SQL Server performance issues with simple tools (Part 2: How to analyze the Perfmon trace and detect SQL Server performance issues)

We have already seen how to detect IO bottlenecks in a Perfmon trace. Now it is time to see how to detect SQL Server issues inside the same trace. We will use the following Objects/Counters:


– SQLServer:Access Methods: Forwarded Records/sec

These should be below 10% of Batch Requests/sec. If not, UPDATE operations on HEAP tables (tables that don't have a clustered index) are leaving forwarded records behind: when an updated row no longer fits on its page, SQL Server moves it and leaves a forwarding pointer at the old location, so every later lookup pays for an extra hop. You can resolve this issue by creating a clustered index on these HEAPs (or by rebuilding them).
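These percentage rules are easy to check once you have averaged the counter values from your trace. A minimal sketch in Python (the function name and the sample values are illustrative; plug in your own averages):

```python
def forwarded_records_ok(forwarded_per_sec: float, batch_requests_per_sec: float) -> bool:
    """Forwarded Records/sec should stay below 10% of Batch Requests/sec."""
    if batch_requests_per_sec == 0:
        return forwarded_per_sec == 0  # idle system: any forwarding is suspicious
    return forwarded_per_sec < 0.10 * batch_requests_per_sec

print(forwarded_records_ok(50, 1000))   # 50 < 100  -> True
print(forwarded_records_ok(150, 1000))  # 150 >= 100 -> False
```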

– SQLServer:Access Methods: Full Scans/sec & Index Searches/sec

You should see at most one Full Scan for every 1000 Index Searches; otherwise you have many table/index scans, which are usually very expensive. This is especially true if you also notice high CPU usage. To remedy this issue, review the existing indexes and add any missing indexes as necessary.
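This 1000:1 rule can also be checked from averaged counter values (a minimal sketch; the function name is illustrative):

```python
def scan_ratio_ok(full_scans_per_sec: float, index_searches_per_sec: float) -> bool:
    """Expect at least 1000 index searches for every full scan."""
    if full_scans_per_sec == 0:
        return True  # no scans at all is fine
    return index_searches_per_sec / full_scans_per_sec >= 1000

print(scan_ratio_ok(2, 5000))   # 2500:1 -> True
print(scan_ratio_ok(10, 5000))  # 500:1  -> False
```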

– SQLServer:Access Methods: Page Splits/sec

These should be below 20% of Batch Requests/sec. If not, you are probably using an inefficient fill factor for some of your indexes. If a table receives many INSERT operations, the fill factor should be set lower (leaving more free space on each page); if a table mostly serves SELECT operations, it should be set higher. To find a good value for your indexes you will need to do extensive testing.
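As with the other ratio rules in this list, the 20% threshold can be checked from averaged counter values (a minimal sketch; the function name is illustrative):

```python
def page_splits_ok(page_splits_per_sec: float, batch_requests_per_sec: float) -> bool:
    """Page Splits/sec should stay below 20% of Batch Requests/sec."""
    return page_splits_per_sec < 0.20 * batch_requests_per_sec

print(page_splits_ok(100, 1000))  # 100 < 200  -> True
print(page_splits_ok(300, 1000))  # 300 >= 200 -> False
```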

– SQLServer:Buffer Manager: Buffer cache hit ratio

This should be above 90%. If not, then allocating more memory to the SQL Server process is a quick way to remedy this issue.

– SQLServer:Buffer Manager: Free pages

The number of Free pages should stay above 640 (640 pages × 8 KB = 5 MB). If not, this is an indication of memory pressure.

– SQLServer:Buffer Manager: Lazy Writes/sec

These should stay below 20. If not, this is an indication that you should allocate more memory to the SQL Server process.

– SQLServer:Buffer Manager: Page Life Expectancy

This should be above 300 seconds; otherwise pages are being flushed from memory and read back from disk too often. If it is below that, you should allocate more memory to the SQL Server process.
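The three Buffer Manager memory-pressure counters above (Free pages, Lazy Writes/sec and Page Life Expectancy) are usually evaluated together. A minimal sketch using the thresholds from this article (the function name is illustrative):

```python
def memory_pressure(free_pages: float, lazy_writes_per_sec: float,
                    page_life_expectancy_sec: float) -> list[str]:
    """Return the list of Buffer Manager rules that are violated."""
    problems = []
    if free_pages < 640:
        problems.append("Free pages below 640")
    if lazy_writes_per_sec >= 20:
        problems.append("Lazy Writes/sec at or above 20")
    if page_life_expectancy_sec < 300:
        problems.append("Page Life Expectancy below 300 s")
    return problems

print(memory_pressure(1000, 5, 500))  # healthy -> []
print(memory_pressure(100, 25, 100))  # all three rules violated
```

An empty list means none of the three memory-pressure rules fired; anything else points at allocating more memory to the SQL Server process.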

– SQLServer:Buffer Manager: Page lookups/sec

This should stay below (Batch Requests/sec × 100). If not, inefficient plans are being generated and too many pages are being looked up. Sometimes the cause is fragmented indexes and outdated statistics; in other cases, the queries themselves need tuning.

– SQLServer:Buffer Manager: Page reads/sec & Page writes/sec

Both these counters should stay below 90. If not, this is an indication of insufficient memory allocated to the SQL Server process.

– SQLServer:Locks: Number of Deadlocks/sec

This should be 0. Otherwise you need to examine and resolve the deadlocks that are occurring.

– SQLServer:SQL Statistics: Batch Requests/Sec

There is no recommended value for this counter, because it depends on how busy your system is. From personal experience, any system handling more than 1000 Batch Requests/sec is a very busy system and should be provisioned accordingly (fast network, fast disks and multiple CPU cores).

– SQLServer:SQL Statistics: SQL Compilations/sec

These should be below 10% of Batch Requests/sec. If not, ad-hoc queries are probably being executed; they would benefit from being rewritten as stored procedures (or parameterized queries) to avoid the repeated compilations.
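Tying things together: a Perfmon .blg trace can be converted to CSV with `relog trace.blg -f CSV -o trace.csv`, and the ratio rules in this article can then be checked with a few lines of scripting. A minimal sketch (relog's column headers follow the `\\MACHINE\Object\Counter` pattern, so we match on a substring; the sample data and helper name are illustrative):

```python
import csv
import io

def average_counter(csv_text: str, counter_substring: str) -> float:
    """Average one counter column from a Perfmon CSV export (relog -f CSV)."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    col = next(i for i, name in enumerate(header) if counter_substring in name)
    values = []
    for row in reader:
        try:
            values.append(float(row[col]))
        except (ValueError, IndexError):
            continue  # skip blank or malformed samples
    return sum(values) / len(values)

# Illustrative two-sample export with the header layout relog produces.
sample = (
    '"(PDH-CSV 4.0)","\\\\SRV1\\SQLServer:SQL Statistics\\Batch Requests/sec",'
    '"\\\\SRV1\\SQLServer:SQL Statistics\\SQL Compilations/sec"\n'
    '"04/01/2024 10:00:00","1000","50"\n'
    '"04/01/2024 10:00:15","1200","70"\n'
)

batches = average_counter(sample, "Batch Requests/sec")     # 1100.0
compiles = average_counter(sample, "SQL Compilations/sec")  # 60.0
print(compiles < 0.10 * batches)  # 60 < 110 -> True, compilations look healthy
```

The same `average_counter` call works for every counter in this article; only the substring and the threshold formula change.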


So, now you know how to identify the root cause of a SQL Server performance issue and how to check for IO bottlenecks. Next week I will discuss how to collect useful data with a Profiler trace!

Comments (2)

  1. J Harris says:

    Great post John with very targeted details. Q: would these threshold numbers be the same for older versions of SQL? Or should they be relaxed somewhat?
