Last week SAP released a set of new Support Packages for their 7.00 Basis. This is usually not worth reporting from a database point of view. However, this time you'll see some surprising changes after Basis Support Package 12 is applied – if you managed to download it using the SAP Solution Manager Maintenance Optimizer (http://service.sap.com/solman-mopz). Basis Support Package 12 for SAP Basis 7.00 introduces the new database monitor framework called DBACockpit. If you type in transaction code ST04 or DB02, you will automatically end up in DBACockpit. I am sure we will report more on specific DBACockpit functionality here, but for a first overview it is best to check out the documentation you can find here: https://www.sdn.sap.com/irj/sdn/mss
But this article is not about the advantages and great features of DBACockpit. Instead, it addresses those customers who cannot enjoy DBACockpit because they are running an older SAP release on SQL Server 2005. The DBACockpit functionality against SQL Server 2005 uses many of the new Dynamic Management Views (DMVs) of SQL Server 2005. The older ST04 and DB02 functionality uses these DMVs only in a very restricted way or not at all.
I'll try to bridge these shortcomings a bit with this article by introducing some queries against these new DMVs. They are similar to those used by SAP DBACockpit. As for the scripts themselves, I did not develop all of them from scratch but borrowed some of them from our Best Practices Team (http://technet.microsoft.com/en-us/sql/bb331794.aspx – thanks guys) and changed them for my purposes. I did that while I was engaged in an onsite PoC (Proof of Concept) at a big deployment in Hong Kong, where we tested the nightly batch cycle of SAP IS-U functionality.
Since the original collection of scripts is rather long, I decided to release these scripts in three parts on this Blog site.
Query Statistics Collection
A new DMV called sys.dm_exec_query_stats was introduced in SQL Server 2005. Its purpose is to display aggregated performance data of query executions. Various data about the executions is tracked, such as elapsed time, worker time (CPU time spent), logical reads, physical reads, execution count, etc. A handle for every statement is shown as well. With this handle you can get the SQL query text. Another handle associated with every statement is the plan_handle, which points to the query plan of that particular statement. The performance data is not persisted in any table and can fall victim to being pushed out of the cache that sys.dm_exec_query_stats represents in the SQL Server engine. However, experience over nearly 3 years in the SAP space (counting the time Microsoft ran their large SAP R/3 instance on Beta versions of SQL Server 2005) showed that the data stays around for a rather long time, especially in 64-bit environments with plenty of memory for SQL Server. This also means that a restart of the SQL Server instance erases the data.
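For a first look at the raw data, a minimal query against the DMV could look like the following (a sketch; the TOP 10 ordering by elapsed time is just an example):

```sql
-- Minimal sketch: inspect the raw aggregates kept per cached statement.
-- Time columns are reported in microseconds in SQL Server 2005.
SELECT TOP 10
       execution_count,
       total_elapsed_time,   -- accumulated elapsed time over all executions
       total_worker_time,    -- accumulated CPU time
       total_logical_reads,
       total_physical_reads,
       sql_handle,           -- handle to retrieve the statement text
       plan_handle           -- handle to retrieve the query plan
FROM sys.dm_exec_query_stats
ORDER BY total_elapsed_time DESC
```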
An important point to note is that between 4.6C and ERP2005, the way statements get issued by SAP changed in two steps. In 6.20 and 6.40 based kernels, temporary stored procedures got banned and were replaced by so-called parameterized statements. In the 7.00 Basis, static stored procedures were eliminated as well and also replaced by parameterized statements. So once you are on a 7.00 based SAP product, there is no need to execute sap_droproc anymore. However, keep this in mind when you look at the statements themselves: not having stored procedures anymore also removes the possibility of manipulating a stored procedure with a query hint. Query hints now definitely need to be applied in the ABAP and Java code.
DBACockpit also uses sys.dm_exec_query_stats to display the aggregated query performance, besides SAP's own collection of query statistics on the application side, which has been around for many years. In the 'old' ST04 monitor you could get to SAP's collection via 'Detail Analysis Menu' -> 'SAP Stats on SPs'. In DBACockpit, you can define the number of top entries to display (like Top 100) and you can sort according to different criteria. You can do this also with the query below or with modified versions of it. E.g. this query (Query 1) shows the 100 longest running statements which were executed more than 1,000 times:
select top 100 (total_elapsed_time/execution_count)/1000 as 'Single Exec in ms',
total_elapsed_time/total_worker_time as 'Wait Ratio', execution_count,
total_physical_reads/execution_count as 'Avg physical read',
total_logical_reads/execution_count as 'Avg logical read', sql_handle, plan_handle
from sys.dm_exec_query_stats
where execution_count > 1000 order by total_elapsed_time/execution_count desc
Well, that is not entirely accurate: the query does not show you the statement itself, it only returns the handle to the statement. You need to use the sql_handle to get to the statement text. To do so, use the next query:
select * from sys.dm_exec_sql_text(<sql_handle>)
A sql_handle usually looks like: 0x0200000069AADA035DACBAD745A62B9D58663D6B315802B9
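If you do not want to copy handles around manually, the handle lookup can also be combined with Query 1 in one step via CROSS APPLY. A sketch, using the same filter values as above:

```sql
-- Sketch: same ranking as Query 1, but with the statement text resolved inline
-- via CROSS APPLY instead of a separate lookup per sql_handle.
select top 100
       (qs.total_elapsed_time/qs.execution_count)/1000 as 'Single Exec in ms',
       qs.execution_count,
       qt.text as 'Statement'
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
where qs.execution_count > 1000
order by qs.total_elapsed_time/qs.execution_count desc
```

Note that qt.text contains the whole batch; how to cut out the individual statement with the statement offsets is shown in Query 2 further below.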
The result you would get for a parameterized stored procedure (4.6C Basis to 6.40 Basis systems only) could look like:
CREATE PROCEDURE Y3R61000068H29D1803PAKDATBUFi3o7ns @P000 nvarchar(2) = N' ', @P001 nvarchar(22) = N' ', @P002 int = 0 AS
SELECT "RELID" AS c ,"SRTFD" AS c ,"SRTF2" AS c ,"CDATE" AS c ,"CTIME" AS c ,"CLUSTR" AS c ,"CLUSTD" AS c FROM "PAKDATBUF"
WHERE "RELID" = @P000 AND "SRTFD" = @P001 AND "SRTF2" >= @P002 ORDER BY "RELID" ,"SRTFD" ,"SRTF2" /* R3:SAPLPA_PACKAGE_SERVICES:3324 T:PAKDATBUF */
Or in a case of a parameterized statement, the output could look like:
(@P1 nvarchar(6),@P2 nvarchar(20),@P3 nvarchar(6),@P4 nvarchar(20))SELECT "VBELN" ,"VDATU" FROM "VBAK",
( SELECT k0 = @P1,k1 = @P2 UNION ALL SELECT @P3, @P4) as q WHERE "MANDT" = q.k0 AND "VBELN" = q.k1 /* R3:SAPLXCIF:1951 T:VBAK */
In this case you are looking at a statement that originates from the ABAP construct 'SELECT ... FOR ALL ENTRIES'. Please note the leading parameter definitions.
In order to get the query plan of such a query, one can use the plan_handle (also delivered with the query above). However the query plan will be shown as an XML query plan:
select * from sys.dm_exec_query_plan(<plan_handle>)
The result of the statement above will give you a column which looks like a link.
Click on the link and a Query Window presenting an XML file opens. Store this XML file in a folder. ATTENTION: Save it with the extension .sqlplan, NOT .xml or .sql!
Open the <xxx>.sqlplan file with SQL Server Management Studio (SSMS) again. SSMS will now show the graphical query plan.
There is some additional info in the XML plan which can be used for investigating strange behavior
of parameterized queries. Towards the end of the XML file such a section can be found:
<ColumnReference Column="@P16" ParameterCompiledValue="'0'" />
<ColumnReference Column="@P15" ParameterCompiledValue="' '" />
<ColumnReference Column="@P14" ParameterCompiledValue="' '" />
<ColumnReference Column="@P13" ParameterCompiledValue="' '" />
<ColumnReference Column="@P12" ParameterCompiledValue="(0.00)" />
<ColumnReference Column="@P11" ParameterCompiledValue="' '" />
<ColumnReference Column="@P10" ParameterCompiledValue="'00000000'" />
This is the information about the parameter values used to compile the query. In cases where long run times of queries are investigated and where the query seems to take a wrong query plan, this info can be helpful to find out why the Query Optimizer decided on that plan. These values can also be used to execute the query in a Query Window.
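To replay such a statement in a Query Window, you can declare variables for the parameters, assign the compiled values from the XML plan, and execute the statement text. A hypothetical sketch for the VBAK example shown earlier (the parameter values below are made up for illustration):

```sql
-- Hypothetical sketch: replay the parameterized VBAK statement from above
-- with concrete values, e.g. taken from the ParameterCompiledValue entries.
DECLARE @P1 nvarchar(6), @P2 nvarchar(20), @P3 nvarchar(6), @P4 nvarchar(20)
SET @P1 = N'100'        -- example client (MANDT), made up
SET @P2 = N'0000004711' -- example sales document (VBELN), made up
SET @P3 = N'100'
SET @P4 = N'0000004712'
SELECT "VBELN" ,"VDATU" FROM "VBAK",
( SELECT k0 = @P1, k1 = @P2 UNION ALL SELECT @P3, @P4 ) as q
WHERE "MANDT" = q.k0 AND "VBELN" = q.k1
```

Executed this way with the compiled values, you can compare the plan SQL Server generates for these specific values against the cached plan.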
TIP: General performance analysis can be done most effectively if you check for
· queries that are executed very often and
· that also have a high average execution time.
Because of that, we added the requirement that queries be executed more than 1,000 times in Query 1 above. However, you can adapt the filtering values as you like.
Another possibility the DMV sys.dm_exec_query_stats offers is finding statements which access a certain object. You can also filter for selects, updates or inserts affecting one object only. The query could look like Query 2 below:
SELECT qs.execution_count as 'Execution Count',
(qs.total_elapsed_time/qs.execution_count)/1000 as 'Avg. Exec in ms',
(qs.total_logical_reads/qs.execution_count) as 'Avg. logical Reads',
(qs.total_physical_reads/qs.execution_count) as 'Avg. physical Reads',
(qs.total_elapsed_time/qs.total_worker_time) as 'Wait Ratio',
SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
((case when qs.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end) - qs.statement_start_offset)/2) as
query_text, qt.dbid, dbname=db_name(qt.dbid), qt.objectid, qs.*
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt where
qt.text like '%SELECT%FROM%VBAK%' -- define the query pattern to look for here
ORDER BY qs.execution_count DESC
Please adapt the LIKE pattern ('%SELECT%FROM%VBAK%') in the query above according to your needs. The case shown above displays some performance data on statements which select from table VBAK. However, be aware that this query could run a minute or more! What the query basically does is: first, take every entry in the DMV sys.dm_exec_query_stats; second, execute 'select text from sys.dm_exec_sql_text()' for each of the handles; third, filter the resulting text for the pattern you defined. Assuming that there are between 10K and 30K entries in sys.dm_exec_query_stats on today's 64-bit servers, the select can take some time to come back. Also note that we isolated the individual statement with the SUBSTRING function in the query. In this case, you will not be able to distinguish whether the statement was executed out of a parameterized stored procedure or as a parameterized statement.
In both statements the Wait Ratio (great idea Martin, thanks a lot) gives the ratio between elapsed time and CPU time used for executing the queries. This means: the lower the value, the less the executions were delayed by physical I/O, latches or locks. This is what you want to see. On the other side, the higher the value, the more time was spent waiting for resources like I/O, latches, locks or other resource bottlenecks. Queries with such results need more investigation.
Another way to use the DMV sys.dm_exec_query_stats is to find the most read-intensive queries. Query 3 identifies the queries that are most intensive on logical or physical I/O. This can be worth pursuing when optimizing utilization of the SQL Server buffer pool.
SELECT TOP 100 (qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count as [Avg IO], qs.execution_count, SUBSTRING(qt.text,(qs.statement_start_offset/2 +1),
(case when qs.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end -qs.statement_start_offset)/2) as query_text,
qt.dbid, dbname=db_name(qt.dbid), qt.objectid, qs.plan_handle
FROM sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
where qs.execution_count > 1000 ORDER BY [Avg IO] DESC
As you can see in the query, we capped the list at the top 100 statements using the TOP operator of T-SQL.
Don't get confused if the SAP application side list of queries and the performance data associated with it looks different in several aspects. There are major differences like:
· SAP keeps most of the data around for the uptime of the specific SAP application instance, whereas SQL Server can flush the data on a specific query when memory pressure occurs
· SAP measures include the roundtrip times to the client
· The DMV sys.dm_exec_query_stats does not keep queries which SQL Server considers to end up in zero-cost plans. This means extremely simple queries for which SQL Server does not even store a compiled query plan, but rather compiles the query with every repeated execution. This hardly happens with SAP applications, but it can happen.
This can result in lists which look different in SQL Server and on the SAP side. Both methods of looking at performance of SAP queries make sense at this point in time. We need to look into future SQL Server releases making our query statistics more detailed in order to be able to replace the measures on the SAP side.
I will update the blog pretty soon and will then share more queries to analyze the data stored in the new DMVs.
Thanks again to my wife Elke for looking critically at my writing and correcting it in several places.