Nitish Nagpal, Support Escalation Engineer, brings another great write-up based on his experience troubleshooting a SQL Server performance issue with Profiler alongside the SQL team. Read on!!
When to use SQL Trace
Deciding when to use SQL Trace depends on the specific goals and requirements of the environment you're looking at. A question you should ask yourself is: can the same objectives be met through dynamic management objects? If they can, then SQL Trace probably isn't the best way to collect the information, because a lot of it is already tracked by SQL Server 2005 and later in the dynamic management
objects. The big thing is that tracing shouldn't always be a first response to a performance problem. Tracing often adds additional overhead to the server and can make the performance problems being experienced worse. Some examples of when SQL Trace might be appropriate are: in response to an error or alert, to gather additional information for after-the-fact data analysis, for performance tuning of development code, or during integration testing. And sometimes, proactive tracing can be used to
avoid having to wait for a problem to reoccur to find out what caused it.
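For example, the "what are my slowest queries" question can often be answered straight from the dynamic management objects, without starting a trace at all. A minimal sketch against the documented sys.dm_exec_query_stats DMV (the TOP 10 cutoff is an arbitrary choice; times are in microseconds, and only plans still in the cache are visible):

```sql
-- Top 10 cached queries by average elapsed time (SQL Server 2005+).
SELECT TOP 10
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_time_us,
    qs.total_worker_time  / qs.execution_count  AS avg_cpu_time_us,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time_us DESC;
```

If this view already answers the question, there is no need to pay the cost of a trace.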
SQL Profiler causes more performance load
Tracing live data using Profiler can create significant overhead that reduces the performance of SQL Server. The reason is that events have to buffer their data into memory, and if the memory available for the trace fills up because Profiler is not consuming the events fast enough, events have to wait inside the server. This can result in lost events, because if the events aren't consumed fast enough, the memory buffers have to be freed up to unblock activity inside the server.
Remember, it can take up to 20 seconds for the server to decide to start freeing events from the buffers so that waiting tasks inside the server can proceed. Twenty seconds is a significant performance hit on a busy server that is generating a lot of activity.
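One way to see whether a running trace is keeping up is to check its dropped-event counter. A quick sketch against the documented sys.traces catalog view:

```sql
-- List running traces and how many events each has produced and dropped.
-- A non-zero dropped_event_count means the buffers filled faster than
-- the consumer (e.g. the Profiler GUI) could read them.
SELECT id,
       path,                -- NULL for rowset (Profiler) traces
       buffer_count,
       buffer_size,
       event_count,
       dropped_event_count
FROM sys.traces
WHERE status = 1;           -- 1 = running
```

A file-based server-side trace avoids most of this pressure because the server writes events to disk itself rather than waiting on a GUI consumer.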
When I troubleshoot slow-performing queries, I like to capture the following events:
- RPC:Completed
- SP:StmtCompleted
- SQL:BatchStarting
- SQL:BatchCompleted
- Showplan XML
Let’s look at each one to see what the event does, and why I have chosen to collect it for this particular trace.
The RPC:Completed event fires after a stored procedure is executed as a remote procedure call.
This event includes useful information about the execution of the stored procedure, including the Duration, CPU, Reads, Writes, together with the name of the stored procedure that ran. If a stored procedure is called by the Transact-SQL EXECUTE statement,
then this event will not fire. In addition, this event does not fire for Transact-SQL statements that occur outside a stored procedure.
The SP:StmtCompleted event tells us when a statement within a stored procedure has completed.
It also provides us the text (the Transact-SQL code) of the statement, along with the event’s Duration, CPU, Reads, and Writes. Keep in mind that a single stored procedure may contain a single, or many, individual statements. For example, if a stored
procedure executes five SELECT statements, then there will be five SP:StmtCompleted events for that stored procedure. This event does not fire for Transact-SQL statements that occur outside a stored procedure.
A SQL:BatchStarting event is fired whenever a new Transact-SQL batch begins. This can include a batch inside or outside a stored procedure.
I use this as a context event because it fires anytime a new stored procedure or Transact-SQL statement fires, allowing me to better see where a new chain of query-related events begins.
The SQL:BatchCompleted event occurs when a Transact-SQL batch completes, whether the batch runs inside or outside a stored procedure.
If the batch is a stored procedure call, SQL:BatchCompleted provides the name of the stored procedure (but not the actual Transact-SQL code), together with the Duration, CPU, Reads, and Writes of the batch. It may seem as if this event is somewhat redundant, in that it produces very similar results to the SP:StmtCompleted event. Unfortunately, you need to capture both events: SQL:BatchCompleted to provide Duration, CPU, Reads, and Writes event data when a Transact-SQL statement is run outside a stored procedure, and SP:StmtCompleted to see more easily what the code is inside a stored procedure.
The Showplan XML event displays the graphical execution plan of a query.
While Showplan XML is not required to identify slow-running queries, it is critical to understanding why a query is performing poorly, so it needs to be included in the trace.
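Taken together, the five events above can be collected with a server-side trace written to a file, which avoids the Profiler GUI overhead described earlier. A trimmed sketch, assuming the usual documented event IDs (10 = RPC:Completed, 45 = SP:StmtCompleted, 13 = SQL:BatchStarting, 12 = SQL:BatchCompleted, 122 = Showplan XML) and column IDs (1 = TextData, 13 = Duration); the file path is a placeholder, and in practice you would add sp_trace_setevent calls for every column you need:

```sql
DECLARE @TraceID INT, @maxsize BIGINT, @on BIT;
SET @maxsize = 100;   -- MB per rollover file
SET @on = 1;

-- 2 = TRACE_FILE_ROLLOVER; the server appends .trc to the placeholder path
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\SlowQueries', @maxsize;

-- Capture TextData (1) and Duration (13) for each event of interest
EXEC sp_trace_setevent @TraceID, 10,  1, @on;   -- RPC:Completed
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 45,  1, @on;   -- SP:StmtCompleted
EXEC sp_trace_setevent @TraceID, 45, 13, @on;
EXEC sp_trace_setevent @TraceID, 13,  1, @on;   -- SQL:BatchStarting
EXEC sp_trace_setevent @TraceID, 12,  1, @on;   -- SQL:BatchCompleted
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 122, 1, @on;   -- Showplan XML

EXEC sp_trace_setstatus @TraceID, 1;            -- 1 = start the trace
SELECT @TraceID AS TraceID;                     -- keep this to stop it later
```

Stopping and closing the trace later is done with sp_trace_setstatus using status values 0 (stop) and 2 (close and delete the definition).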
This very useful data column, Duration, provides the length of time in microseconds that an event takes from beginning to end, but what is curious is that when Duration is displayed in the Profiler GUI, it is shown, by default, in milliseconds. So internally, SQL Server stores
Duration data as microseconds, but displays it as milliseconds in Profiler. If you want, you can change this default behavior by going to Tools | Options in Profiler and selecting "Show values in Duration column in microseconds".
I prefer to leave it at the default display of milliseconds, as it is easier to read.
Since our goal in this trace is to identify long-running queries, the Duration data column is central to our analysis.
Note that the Duration data column only exists for the RPC:Completed, SP:StmtCompleted and SQL:BatchCompleted events.
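The microsecond storage matters when you filter on Duration in a server-side trace: sp_trace_setfilter compares against the stored microsecond value, not the milliseconds the GUI shows. A sketch, assuming @TraceID holds the ID of a trace created with sp_trace_create, and using the documented filter arguments (column 13 = Duration, logical operator 0 = AND, comparison operator 4 = greater than or equal):

```sql
-- Keep only events that ran for at least 1 second.
-- The threshold is in MICROSECONDS, because that is how Duration is stored.
DECLARE @threshold BIGINT;
SET @threshold = 1000000;   -- 1,000,000 us = 1,000 ms = 1 s
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @threshold;
```

A common mistake is to pass a millisecond value here and wonder why the trace captures far more events than expected.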
The ObjectName data column contains the logical name of the object being referenced during an event. If a stored procedure is being executed, then the name of the stored procedure is displayed. If an isolated query is being executed, then the message "Dynamic SQL" is inserted in this data column. This column therefore provides a quick means of telling whether or not a query is part of a stored procedure.
Note that this data column is not captured for the SQL:BatchStarting or the SQL:BatchCompleted events.
The contents of the TextData column depend on the event that is being captured and in what context.
For the SQL:BatchStarting or SQL:BatchCompleted events, the TextData column will either contain the name of the executing stored procedure or, for a query outside a stored procedure, the Transact-SQL code for that query.
If the event is SP:StmtCompleted, the TextData column contains the Transact-SQL code for the query executed within the stored procedure.
For the Showplan XML event, it includes the XML code used to create the graphical execution plan for the query.
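Once a trace has been saved to a file, the TextData column, including the Showplan XML event's plan, can be pulled back into a table for analysis. A sketch using the documented fn_trace_gettable function; the file path is a placeholder, and 122 is the event class number assumed for Showplan XML:

```sql
-- Load a saved trace file and pull out the captured execution plans.
SELECT StartTime,
       DatabaseName,
       CAST(TextData AS XML) AS QueryPlan
FROM fn_trace_gettable(N'C:\Traces\SlowQueries.trc', DEFAULT)
WHERE EventClass = 122;     -- Showplan XML
```

In Management Studio, clicking a value in the QueryPlan column opens the graphical execution plan.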
The CPU data column shows the amount of CPU time used by an event (in milliseconds). Obviously, the smaller this number, the fewer CPU resources were used for the query. Note that the CPU data column is only captured for the RPC:Completed,
SP:StmtCompleted, and SQL:BatchCompleted events.
This data column shows the number of logical page reads that occurred during an event. Again, the smaller this number, the fewer disk I/O resources were used for the event. Note that Reads are only captured for the RPC:Completed, SP:StmtCompleted and the SQL:BatchCompleted events.
This data column shows the number of physical writes that occurred during an event and provides an indication of the I/O resources that were used for an event. Again, Writes is only captured for the RPC:Completed, SP:StmtCompleted and the SQL:BatchCompleted events.
The value of the RowCounts data column depends on the event. For the SP:StmtCompleted event, the value is the actual number of rows that were returned for the event. For the Showplan XML event, it shows the estimated number of rows to be returned for the event, based on the query's execution plan. The other events don't use this data column. Knowing how many rows a query actually returns can help you determine how hard a query is working. If you know that a query returns a single row, then it should use a lot fewer resources than a query that returns 100,000 rows. If you notice that the actual number of rows returned is significantly different from the estimated number of rows, this may indicate that the statistics used by the Query Optimizer to create the
execution plan for the query are out of date. Out-of-date statistics can result in poorly performing queries, and they should be updated for optimal query performance.
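If the actual versus estimated row counts suggest stale statistics, they can be refreshed directly. A sketch with a placeholder table name:

```sql
-- Refresh statistics for one table with a full scan (placeholder name)...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- ...or refresh all out-of-date statistics in the current database.
EXEC sp_updatestats;
```

After updating statistics, re-capture the Showplan XML event for the query to confirm the estimates now line up with the actual row counts.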
The DatabaseName data column shows the name of the database the event occurred in. Often, you will want to filter on this data column so that you only capture those events that occur in a specific database.
The ApplicationName data column shows the name of the client application that is communicating with SQL Server. Some applications populate this data column; others don't. Assuming this data column is populated, you can use it to filter events on a particular application.
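Both of these columns are natural filter targets. In a server-side trace, the equivalent of Profiler's column filters is sp_trace_setfilter. A sketch assuming an existing @TraceID from sp_trace_create, the documented column IDs (35 = DatabaseName, 10 = ApplicationName), and placeholder name values:

```sql
-- Only capture events from the Sales database (6 = LIKE)...
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'Sales';

-- ...and skip SQL Server Agent's own activity (7 = NOT LIKE).
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQLAgent%';
```

Filtering at the server keeps the trace small and reduces the overhead of collecting events you would only discard later.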
Virtually every event has a StartTime data column and, as you would expect, it includes the time the event started. Often, the start time of an event can be matched to other related events to identify their order of execution. It can also be compared to the stop time of events to determine the differences in time between when one event started and another completed.
The EndTime data column is used for those events that have a specific end time associated with them. It can be used to help identify when a particular query or transaction runs at a specific point in time.
This data column is mandatory for every event, and contains the number of the server process ID (SPID) that is assigned to the client process creating the event. It can help identify what connections are being used for an event, and can also be
used as a filter to limit the number of events returned to those of particular interest.
Most events include the LoginName data column. It stores the login of the user that triggered the event. Depending on the type of login used, this column can contain either the SQL Server login ID or the Windows Login ID (domain\username). This very useful
data column helps you identify who is causing potential problems. It is a good column to filter on, in order to limit trace results to those of a specific user.
Every event produced by SQL Server Profiler is assigned a sequence number that indicates the order that events occurred in SQL Server. Much of the time, you will capture traces in default order, which means the events are displayed in the order they occurred
in the Profiler GUI. EventSequence numbers appear in ascending order.
However, should you choose to apply a custom grouping, based on a certain data column such as Duration, then the EventSequence data column makes it easier for you to see which events occurred before, or after, other events.
The BinaryData data column is only populated for the Showplan XML event and includes the estimated cost of the query, which is used for displaying the graphical execution plan. This data is not human-readable, and is used internally by Profiler to help display
the graphical execution plan.
Written by: Nitish Nagpal, Support Escalation Engineer
Reviewed by: SQL Support team