
Top 10 tips to prepare your Dynamics AX 2012 Go Live – Series 2/3


Hi,

Let's now have a look at the second part of "the top 10 recommendations" list to prepare a Microsoft Dynamics AX 2012 Go Live.

4. Review long running queries from Dynamics AX users

This is my favourite tool when it comes to performance monitoring. Other tools give you great insight into expensive queries (DynamicsPerf) or resource contention (Perfmon), but this option lets you map the SQL statement to the X++ call stack that generated it.

It records long-running SQL statements generated by the Microsoft Dynamics AX application. The information provided includes the SQL text, duration, X++ call stack and AX user ID.

The overhead of enabling this is directly proportional to the number of events that exceed the threshold, since a record is written to the SYSTRACETABLESQL table each time that happens. When the duration threshold is set to a reasonably high value (2000 ms or greater), the overhead is very small and should not be noticeable to end users.

This is disabled by default for all users. To enable it, you can manually change each user's options under Database, but our recommendation is to run the following stored procedure from [DynamicsPerf] to update all users automatically:

USE DynamicsPerf

EXEC SET_AX_SQLTRACE @DATABASE_NAME = 'MicrosoftDynamicsAX', @QUERY_TIME_LIMIT = 5000

You can also use the SQL jobs [DYNPERF_Set_AX_User_Trace_on] and [DYNPERF_Set_AX_User_Trace_off] to enable or disable this option for all users.
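
If you want to verify what has actually been applied, you can read the threshold back from the AX database. The query below is only a sketch: it assumes the per-user long-query threshold set above is stored in the QUERYTIMELIMIT column of the USERINFO table, so check the column names in your own environment before relying on it.

-- Hedged sketch: read back the long-query threshold per AX user.
-- Assumes the threshold lives in USERINFO.QUERYTIMELIMIT (verify in your environment).
USE MicrosoftDynamicsAX
SELECT ID, NAME, QUERYTIMELIMIT FROM USERINFO ORDER BY NAME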

Tip: during Go Live you can query the Dynamics AX database directly to see which statements are being recorded. Of course, it is recommended to clean up the SYSTRACETABLESQL table beforehand:

SELECT CREATEDDATETIME, CREATEDBY, TRACETIME, STATEMENT, CALLSTACK FROM SYSTRACETABLESQL WHERE TRACETIME > 0 ORDER BY TRACETIME DESC
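
Once the table starts filling up, a short aggregation can point you to the worst offenders. The sketch below only uses the SYSTRACETABLESQL columns shown above (the TOP value and grouping are just examples; the CAST is there because the statement column is a large text field):

-- Hedged sketch: group the recorded long-running statements to spot the heaviest ones.
SELECT TOP 20
       CAST(STATEMENT AS NVARCHAR(4000)) AS STATEMENT_TEXT,
       COUNT(*)       AS EXECUTIONS,
       AVG(TRACETIME) AS AVG_TRACETIME,
       MAX(TRACETIME) AS MAX_TRACETIME
FROM SYSTRACETABLESQL
WHERE TRACETIME > 0
GROUP BY CAST(STATEMENT AS NVARCHAR(4000))
ORDER BY MAX_TRACETIME DESC
-- To clean the table up before Go Live (as recommended above), a plain DELETE is enough:
-- DELETE FROM SYSTRACETABLESQL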

 

5. Detect all database locking events

Despite all the benchmarking and performance analysis you have done prior to Go Live, it is very likely that you will face unexpected database locking in production. Such events are not always critical, but it is crucial to monitor them at runtime and collect the right amount of information in case the situation gets worse, for example with deadlocks or unresolved blocking statements.

Using the Performance Analyzer for Microsoft Dynamics, also called DynamicsPerf, you can easily collect these SQL events from a light trace, and you can also capture the whole chain of blocking events with another stored procedure.

DynamicsPerf Jobs for Locking

  • Start the light SQL Trace using the job [DYNPERF_Default_Trace_Start]. We recommend modifying the stored procedure DynamicsPerf\SP_SQLTrace at line 190 to keep only three events: (60) Lock Escalation, (137) Blocked Process Report and (148) Deadlock Graph. You can also change the trace path from the default location [C:\SQLTRACE].
  • Start the job [DYNPERF_Optional_Polling_for_Blocking]: this should only be run for the first few hours, and you can look at the BLOCKS table of the [DynamicsPerf] database for details.

Tip: you can read the SQL Trace file directly to see all events at runtime using the following script.

SELECT F.StartTime, F.Duration, E.name AS EventName, F.TextData, F.DatabaseName, F.HostName, F.ApplicationName FROM fn_trace_gettable('C:\SQLTRACE\DYNAMICS_DEFAULT.trc', DEFAULT) F JOIN sys.trace_events E ON F.EventClass = E.trace_event_id ORDER BY F.StartTime DESC

This will help you visualize the number of events directly from the trace during the Go Live. Below is a chart representing the number of lock escalation events per hour over a couple of days. Having such data is really helpful for finding potential root causes, like a change in infrastructure or new code promoted to production.
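
If you want to reproduce that kind of hourly view straight from the trace file, a grouping query over fn_trace_gettable will do it. The sketch below assumes the default trace path used above and simply counts Lock Escalation events (trace event id 60) per hour; adjust the path and event id to your environment.

-- Hedged sketch: count Lock Escalation events (trace event id 60) per hour
-- from the light SQL Trace started by [DYNPERF_Default_Trace_Start].
SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, F.StartTime), 0) AS EventHour,
       COUNT(*) AS LockEscalations
FROM fn_trace_gettable('C:\SQLTRACE\DYNAMICS_DEFAULT.trc', DEFAULT) F
WHERE F.EventClass = 60
GROUP BY DATEADD(HOUR, DATEDIFF(HOUR, 0, F.StartTime), 0)
ORDER BY EventHour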

6. Set up Windows Error Reporting to create full dump files

When the Dynamics AX AOS service stops with unexpected events like a crash (events 1000 and 1001 in the Windows Event Viewer), it creates a user dump file with minimal information. We recommend changing the registry keys on every AOS so that a full dump file is created at each AOS crash. That will allow a Microsoft Support engineer to analyse the file and, hopefully, find the root cause.

Follow the configuration below for every AOS instance:

Registry Key = HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps\Ax32Serv.exe

Under the registry key listed above, the following values are required to ensure that Windows Error Reporting (WER) is configured to create the correct type of dump files (a scripted example follows the list):

  • Name: DumpFolder
  • Type: REG_EXPAND_SZ
  • Value: %LOCALAPPDATA%\CrashDumps (alternatively, you can specify any local folder on the machine where you want to store the dump files)
  • Name: DumpCount
  • Type: REG_DWORD
  • Value: a decimal value; 3 should be fine and will ensure that the dump files do not flood your HDD
  • Name: DumpType
  • Type: REG_DWORD
  • Value: 0
  • Name: CustomDumpFlags
  • Type: REG_DWORD
  • Value: 7015 (decimal; alternatively you can set 0x1B67 in hexadecimal)
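
To avoid setting these values by hand on every AOS server, you can script them with reg.exe from an elevated command prompt. The commands below are only a convenience sketch of the values listed above; D:\CrashDumps is an example folder, so point it to any local drive with enough free space (or keep %LOCALAPPDATA%\CrashDumps as in the list).

rem Sketch only: applies the WER LocalDumps values listed above for Ax32Serv.exe.
rem D:\CrashDumps is an example dump folder; change it to suit your environment.
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps\Ax32Serv.exe" /v DumpFolder /t REG_EXPAND_SZ /d "D:\CrashDumps" /f
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps\Ax32Serv.exe" /v DumpCount /t REG_DWORD /d 3 /f
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps\Ax32Serv.exe" /v DumpType /t REG_DWORD /d 0 /f
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps\Ax32Serv.exe" /v CustomDumpFlags /t REG_DWORD /d 7015 /f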

 

Tip 1: you can manually create a dump file from Windows Task Manager to get information on high memory consumption, for example, or when the service seems stuck on a specific task (like MRP).

 

Tip 2: you can also leverage the new Crash Analysis tool from Lifecycle Services to review the dump file.

You can now check the last post on this topic here.

Regards,

@BertrandCaillet

Principal Premier Field Engineer

Comments (1)
  1. Alexandre says:

    Windows Error Reporting has changed with Windows Server 2012; I think some slight adjustments are necessary on Windows 2012 servers.

