Today, I thought I would spend a little time explaining how to read a Dexterity Script Profile to resolve a performance issue. To demonstrate this, I will use a support case I had where the customer was experiencing intermittent long delays when clicking on the checkbox on the Apply Sales Document window.
The customer in this case had decided not to run Paid Transaction Removal every month, as they wished to continue displaying closed transactions on their statements for a longer period. This did cause the RM_OPEN (RM20101) table to hold a large number of records, but the system should be able to handle this. No other activities showed noticeable performance problems, and clicking the checkbox was sometimes fast but slow most of the time.
When I say slow, I am talking about up to 30 seconds of wait time after clicking the checkbox before control is returned to the user interface. From a user’s perspective that can be very frustrating, especially if they are applying many documents in a day.
So, we have a performance issue in the system and we know the one user action that triggers it: clicking the checkbox on the Apply Sales Document window. Now, how do we analyze what is happening when that user action occurs….
The answer is logging.
We need to see what is happening behind the scenes. It is possible to capture the logs manually using the steps in the Knowledge Base (KB) article I wrote (see below), but it is much simpler to use the Manual Logging Mode feature of the Support Debugging Tool to capture the logs with a single click. For your information, here is the KB article:
Once the Support Debugging Tool was installed on the system, we asked the user to navigate through the application to the point just prior to that final user action (clicking the checkbox). Then, using the Tools Menu on the current window, they selected Start Manual Logging (they could also press Ctrl-Shift-F9). Logging is now active. They then performed the user action (clicked the checkbox) and waited for control to come back to the user interface. Finally, they selected Stop Logging (Ctrl-Shift-F10) from the Tools Menu. That’s it…. the logs have been captured.
The Support Debugging Tool not only captures the common DEXSQL.LOG (the communication between the Dexterity Runtime and the SQL Server), it also captures the Dexterity Script Log (which shows all the Dexterity scripts executed, with their parameters and hierarchy) and the Dexterity Script Profile (which shows all the scripts executed, the number of times each was called and how long each took to run, as well as information on tables accessed and access times). The Support Debugging Tool also names the files with date & time stamps plus the current user and company. Having all three of these logs makes quite detailed analysis possible.
Below is my analysis of the logs generated by the Support Debugging Tool and provided to me by the partner. The files provided are:
- Debugger_sa_LIBL.log (Support Debugging Tool Log)
- Profile_sa_LIBL_20110210_110517.txt (Dexterity Script Profile)
- Script_sa_LIBL_20110210_110517.log (Dexterity Script Log)
- DEXSQL_20110210_110517.LOG (DEXSQL.LOG)
Note: These files will also be attached as an archive file at the bottom of this article.
Looking at the Dexterity Script Profile Profile_sa_LIBL_20110210_110517.txt file (excerpt below), we can see the slow performance when clicking on the Apply Select checkbox.
Note: What you are looking for is a line with a high value in the Time column which does not get much larger when called scripts are included (as shown in the +Children column). You also need to take the Count column into account, as it is possible for a script to accumulate a large time simply because it is called many times.
The first line shows that a total of 47.26 seconds is used for the script and its child scripts to complete; however, only 0.03 seconds was actually spent in this first script. It is the second line, the IsSchInv() of form RM_Cash_Apply function, that holds the majority, 47.23 seconds, and does not gain time from called scripts. This identifies this script as the location of the time taken.
In the same Dexterity Script Profile, the bottom section has Database Statistics (excerpt below) for the various tables accessed.
Start by looking at the last section, which contains total times. We can see that the RM_OPEN table access used 47.24 seconds performing Read First accesses. Looking at the first section, we can see that 2 Read First accesses were recorded. Finally, looking at the middle section, we can see that the average time per operation is 23,623 ms for each Read First access, which adds up to the 47.24 seconds.
Looking at the Dexterity Script Log Script_sa_LIBL_20110210_110517.log file, we can see that the IsSchInv() of form RM_Cash_Apply function is called by the Apply Select checkbox field change script. We could have guessed that from the times seen in the Script Profile, but this is a good confirmation.
Dexterity Script Log Excerpt
So we know that the problem relates to the RM_OPEN table and the IsSchInv() of form RM_Cash_Apply function, but what is actually happening?
If we search the DEXSQL_20110210_110517.LOG file for the RM_OPEN (RM20101) table, we can find the following entries. Look at the times of the first and second entries in the excerpt below: 11:06:09 to 11:06:58. While there is no guarantee that the code executed directly after the first statement went straight back to SQL Server, it does show a long gap between communications to SQL Server.
The next step is not available to everyone. If you have access to the source code for Microsoft Dynamics GP, you can then look up the actual Dexterity code in question and see what it is doing.
So, looking at the IsSchInv() of form RM_Cash_Apply function, I found the use of a range where statement followed by a get first statement. The where clause provided matched the where clause in the DEXSQL.LOG (shown above).
Form Function IsSchInv() of form RM_Cash_Apply Excerpt
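For readers without source code access, the general shape of a range where followed by a get first in Dexterity sanScript is roughly as follows. This is an illustrative sketch only, not the actual Microsoft Dynamics GP code; the local variable l_document_number and the exact clause are assumptions:

```
{ Illustrative sketch only -- not the actual Microsoft Dynamics GP source.
  A range where restricts the table to rows matching an explicit SQL where
  clause; here the restriction is on Document Number alone. }
range table RM_OPEN where physicalname('Document Number' of table RM_OPEN)
	+ " = '" + l_document_number + "'";

{ With no usable index on Document Number, this read forces SQL Server
  to scan the whole RM20101 table. }
get first table RM_OPEN;
if err() = OKAY then
	{ a matching open transaction was found }
end if;
```

The key point is that the restriction reaches SQL Server as a where clause on a single column, so performance depends entirely on an index existing for that column.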
So, we now have the exact line that is causing our performance issue. But we are still left with the big question…. WHY?
This bit gets a little technical and relates to how SQL Server accesses data.
I looked at the defined indexes on the RM_OPEN (RM20101) table and found there are no indexes based on Document Number only, or with Document Number as the first segment of the index. This means that the server needs to scan the entire table to find the record, as it is unable to locate a usable index. Sometimes, depending on the order of the data in the table, SQL Server might find what it is looking for quickly, if the record happens to be near the beginning of the scan or the data has been cached. This explains why the performance was sometimes faster than other times.
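You can check the indexes yourself against the company database; sp_helpindex is a standard, read-only SQL Server procedure:

```sql
-- List all indexes on the RM_OPEN (RM20101) table with their key columns.
-- Run against the company database; this makes no changes.
EXEC sp_helpindex 'RM20101';
```

Look at the index_keys column of the output: if no index lists the Document Number column first, the query above it cannot seek and must scan.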
To improve this situation we can add a new index at the SQL Server level. It is also worth rebuilding the existing indexes and updating the internal statistics kept by SQL Server. For best performance, the index added should contain all fields used in the where clause AND all fields in the order by clause.
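As a sketch, the fix could look something like the following T-SQL. The index name and column list here are assumptions for illustration; build your actual index from the where and order by clauses captured in your own DEXSQL.LOG:

```sql
-- Hypothetical index: the name and column list are illustrative only.
-- Include every column from the query's where clause and order by clause.
CREATE NONCLUSTERED INDEX IX_RM20101_DocumentNumber
    ON dbo.RM20101 (DOCNUMBR);

-- While you are at it, rebuild the existing indexes and refresh statistics.
ALTER INDEX ALL ON dbo.RM20101 REBUILD;
UPDATE STATISTICS dbo.RM20101;
```

Keeping the index narrow (only the columns the query needs) minimizes the extra write cost it adds to inserts and updates on this large table.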
Looking at the SQL Server execution plan for the query (from the DEXSQL.LOG) before and after adding the index, it was easy to see that the index resolved the issue. The customer has also confirmed no performance issues since the index was applied.
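If you want to repeat that before-and-after comparison, one simple way is to have SQL Server return the textual plan instead of executing the statement; paste in the query exactly as captured in your DEXSQL.LOG:

```sql
-- Show the estimated execution plan for the next batch instead of running it.
SET SHOWPLAN_TEXT ON;
GO
-- <paste the query captured in the DEXSQL.LOG here>
GO
SET SHOWPLAN_TEXT OFF;
GO
```

Before the index you should see a scan on RM20101; afterwards, an index seek.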
PS: This issue has been logged as a problem report (62237), so that the Dexterity code can be changed to avoid the performance issue without needing an additional index added. Please log a support case and reference this bug number if you have experienced this issue.