
What’s New in Beta 5


A big THANKS to the community that has been helping test DynamicsPerf 2.0!!

Beta 5 has been posted here: DynamicsPerf 2.0 Beta 5

For anyone who has an older beta installed, it is highly recommended to upgrade to Beta 5. The biggest issue corrected in Beta 5 is the purge script: without it, you may find that your DynamicsPerf database keeps growing in size.

Fix List:

  • Purge script has been corrected. Incorrect data was being inserted into the PURGETABLE.
  • Many of the sample queries in the Analyze project have been corrected or updated.
  • Added an index to QUERY_PLANS_PARSED to improve the performance of all QUERY views.
  • Added a SERVER_NAME column to PERF_HOURLY_WAITSTATS_VW so that you can select by server when collecting from multiple servers.
  • Corrected the SP_INDEX_CHANGES procedure; the index added and index deleted values were reversed.
  • Corrected the DYNPERF_COLLECT_SQL_DATABASE_FILES procedure to properly display the initial database size and other columns.
  • Corrected DYNPERF_UPDATE_SSRS_HISTORY to update the SSRS HISTORY table correctly for default instances of SQL Server.
  • Enhanced DYNPERF_COLLECT_SQL_TEXT and DYNPERF_COLLECT_QUERY_PLANS by passing PLAN_HANDLE as part of remote collection.
  • Enhanced the SERVER_ACTIVITY procedure by adding blocking data to better support viewing current activity on remote servers.
  • Changed the synonyms in SET_AX_SQLTRACE to be unique to avoid a collision with one of the processing procedures.
  • Replaced a temp table in SP_CAPTURESTATS and SP_PROCESS_STATS with work tables to avoid rapid create/delete of temp tables.
  • Added a new procedure, DYNPERF_COLLECT_AX_USERINFO, to collect AX user IDs for better correlation when CONNECTIONCONTEXT is enabled.
  • Enhanced DYNPERF_PROCESS_QUERY_PLANS to skip QUERY_HASH 0x00000000 to improve performance.
  • Enhanced DYNPERF_PROCESS_QUERY_PLANS to process plans in descending order of TOTAL_ELAPSED_TIME, so that the plans most likely to be reviewed are processed first.
  • Enhanced DYNPERF_PROCESS_QUERY_PLANS to set MAXDOP to 25% of the CPUs, with a minimum of 2, to improve the performance of the procedure without overwhelming the CPUs on the SQL Server where DynamicsPerf is installed.
  • Enhanced DYNPERF_REFRESH_QUERY_PLANS and DYNPERF_REFRESH_QUERY_TEXT to use a unique temp table to avoid a blocking condition with the collection tasks.
  • Updated the AOTEXPORT xpo to better support multiple languages.
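
For reference, the MAXDOP heuristic from the fix list (25% of the CPUs, with a floor of 2) can be sketched in T-SQL roughly as follows. This is an illustration only, not the procedure's actual code: the variable name and the use of sys.dm_os_sys_info are assumptions.

    -- Illustrative sketch only: 25% of logical CPUs, with a floor of 2.
    DECLARE @maxdop INT;

    SELECT @maxdop = CASE
                         WHEN cpu_count / 4 < 2 THEN 2
                         ELSE cpu_count / 4
                     END
    FROM sys.dm_os_sys_info;   -- cpu_count = number of logical CPUs on the server

    -- The procedure would then apply this value via an OPTION (MAXDOP n) hint
    -- (e.g. built with dynamic SQL) on its plan-processing statements.

On a 16-core server this yields MAXDOP 4; on a 4-core server the floor keeps it at 2.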

 

Please download the Upgrade DynamicsPerf 2.0 script, which contains all upgrade scripts for all previous betas. Also download the full Beta 5 to get the new installation scripts and the updated sample scripts.

We need as many installs of Beta 5 as possible to find any remaining issues so that we can progress to a release candidate.  A reporting solution for DynamicsPerf 2.0 will be provided at a later date.

Thanks for all of your help!!

Rod “Hotrod” Hansen

Comments (13)
  1. Lau Larsen says:

    Having installed Beta 5, everything seems to be running OK (capture log and jobs), but I still find that none of the DynamicsPerf tables are populated.

    If I run CaptureStats manually, things are captured.

    What am I missing?

    1. Hi Lau,

      The RTM version 2.0 of the DynamicsPerf tool is undergoing final testing and should be made available very soon. Please watch this blog for the announcement.

      Thank you for your understanding,
      @BertrandCaillet

  2. Mariano Gracia says:

    I have had to modify the convertDateTime2Str function from the AOTEXPORT2012_Direct class. It returns a string with a date value by doing:

    ret = DateTimeUtil::toFormattedStr(_dateTimeVal, 213, 2, 3, 2, 3, 4, 1, 1);

    but I'm working with date format 123, which is defined by the operating system, not in Dynamics AX, so I changed it to:

    ret = DateTimeUtil::toFormattedStr(_dateTimeVal, 123, 2, 3, 2, 3, 4, 1, 1);

    There are a lot of methods that catch CLR exceptions and print them, but the call error(strFmt("%1: %2", funcName(), ex.ToString())) throws an error; it seems strFmt doesn't work when ex.ToString() is passed directly as the second parameter. So I declared a string variable, warning, and initialize it before printing the error:

    catch (Exception::CLRError)
    {
        ex = CLRInterop::getLastException();

        if (ex != null)
        {
            ex = ex.get_InnerException();

            if (ex != null)
            {
                warning = ex.ToString();
                error(strFmt("%1: %2", funcName(), warning));
                //error(strFmt("%1: %2", funcName(), ex.ToString()));
                //error(ex.ToString());
            }
        }
    }

  3. Mariano Gracia says:

    I've opened an issue on the CodePlex page: some of the stored procedures raise error 242 (DYNPERF_COLLECT_QUERY_STATS, DYNPERF_COLLECT_SQL_TEXT, DYNPERF_COLLECT_SSRS_EXECUTIONLOG, DYNPERF_COLLECT_SQL_PLAN_GUIDES); the others work with no problems. Is this error related to the datetime type?
    My SQL Server collation is Modern_Spanish_CI_AS (both the DynamicsPerf database and the Dynamics AX database).

    1. Rod Hansen says:

      I posted on that issue. Can you get me the SQL statement from Profiler?

      Thank You

      1. Mariano Gracia says:

        I found it. The problem is in the CONVERT function with datetime: the stored procedure DYNPERF_COLLECT_QUERY_STATS uses CONVERT(NVARCHAR(24), @LAST_RUN, 121), and 121 is ODBC canonical (with milliseconds). I would need 113, which is Europe default + milliseconds, but in the end I simply omit the third parameter of the CONVERT function and let SQL decide which format to use. I have modified all the stored procedures and now it works fine.
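
        For anyone hitting the same issue, the CONVERT styles discussed here differ as follows (a quick illustration; the output comments show the general shape of each format). Error 242 is the "out-of-range datetime value" error SQL Server raises when such a string is later converted back to datetime under a different language setting:

            -- Output formats of the CONVERT datetime styles (values illustrative):
            SELECT CONVERT(NVARCHAR(24), GETDATE(), 121);  -- ODBC canonical w/ ms:   2016-04-15 23:47:03.000
            SELECT CONVERT(NVARCHAR(24), GETDATE(), 113);  -- Europe default w/ ms:   15 Apr 2016 23:47:03:000
            SELECT CONVERT(NVARCHAR(24), GETDATE());       -- no style, server default, e.g. Apr 15 2016 11:47PM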

  4. The INDEX_STATS_CURR_VW is awfully slow (>1 h) due to an expensive index scan of the INDEX_DETAIL table caused by the GROUP BY. In my case it contains 10,000,000 records. By changing the view a little, the same query takes less than a second. I use another CTE for the server & database info, and for every record in that CTE I get the TOP 1 row from INDEX_DETAIL to find the last STATS_TIME. Then I use CROSS APPLY to get all the INDEX_DETAIL rows for that "max date".


    USE [DynamicsPerf]
    GO

    /****** Object: View [dbo].[INDEX_STATS_CURR_VW] Script Date: 2016-04-14 23:47:03 ******/
    SET ANSI_NULLS ON
    GO

    SET QUOTED_IDENTIFIER ON
    GO

    /****** Object: Index [ix_INDEX_DETAIL_ServerName_DatabaseName] Script Date: 2016-04-15 01:06:42 ******/
    DROP INDEX [ix_INDEX_DETAIL_ServerName_DatabaseName] ON [dbo].[INDEX_DETAIL]
    GO

    CREATE NONCLUSTERED INDEX [ix_INDEX_DETAIL_ServerName_DatabaseName] ON [dbo].[INDEX_DETAIL]
    (
        [SERVER_NAME] ASC,
        [DATABASE_NAME] ASC,
        [STATS_TIME] DESC
    )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF,
          ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95) ON [PRIMARY]
    GO

    ALTER VIEW [dbo].[INDEX_STATS_CURR_VW]
    AS
    WITH cteServer (SERVER_NAME, DATABASE_NAME)
    AS
    (
        SELECT LINKED_SERVER, DATABASE_NAME FROM [dbo].[DATABASES_2_COLLECT]
    )
    SELECT D.SERVER_NAME,
           S.SQL_SERVER_STARTTIME,
           D.DATABASE_NAME,
           D.TABLE_NAME,
           D.INDEX_NAME,
           INDEX_DESCRIPTION,
           D.DATA_COMPRESSION,
           INDEX_KEYS,
           INCLUDED_COLUMNS,
           USER_SEEKS,
           USER_SCANS,
           USER_LOOKUPS,
           USER_UPDATES,
           RANGE_SCAN_COUNT,
           PAGE_COUNT,
           ROW_COUNT,
           SINGLETON_LOOKUP_COUNT,
           FORWARDED_FETCH_COUNT,
           INDEX_DEPTH,
           AVG_FRAGMENTATION_IN_PERCENT,
           FRAGMENT_COUNT,
           ROW_LOCK_WAIT_IN_MS,
           PAGE_LOCK_WAIT_IN_MS,
           INDEX_LOCK_PROMOTION_ATTEMPT_COUNT,
           INDEX_LOCK_PROMOTION_COUNT,
           PAGE_LATCH_WAIT_IN_MS,
           PAGE_IO_LATCH_WAIT_IN_MS,
           LEAF_INSERT_COUNT,
           LEAF_DELETE_COUNT,
           LEAF_UPDATE_COUNT,
           LEAF_GHOST_COUNT,
           NONLEAF_INSERT_COUNT,
           NONLEAF_DELETE_COUNT,
           NONLEAF_UPDATE_COUNT,
           LEAF_ALLOCATION_COUNT,
           NONLEAF_ALLOCATION_COUNT,
           LEAF_PAGE_MERGE_COUNT,
           NONLEAF_PAGE_MERGE_COUNT,
           LOB_FETCH_IN_PAGES,
           LOB_FETCH_IN_BYTES,
           LOB_ORPHAN_CREATE_COUNT,
           LOB_ORPHAN_INSERT_COUNT,
           ROW_OVERFLOW_FETCH_IN_PAGES,
           ROW_OVERFLOW_FETCH_IN_BYTES,
           COLUMN_VALUE_PUSH_OFF_ROW_COUNT,
           COLUMN_VALUE_PULL_IN_ROW_COUNT,
           ROW_LOCK_COUNT,
           ROW_LOCK_WAIT_COUNT,
           PAGE_LOCK_COUNT,
           PAGE_LOCK_WAIT_COUNT,
           PAGE_LATCH_WAIT_COUNT,
           PAGE_IO_LATCH_WAIT_COUNT,
           D.STATS_TIME,
           D.INDEX_ID
    FROM cteServer                          -- FOR EVERY SERVER & DATABASE
    -- CORRELATED TOP 1 INSTEAD OF GROUP BY TO AVOID AN EXPENSIVE INDEX SCAN:
    -- the TOP 1 is taken per server/database pair, seeking the index created above
    CROSS APPLY (SELECT TOP 1 STATS_TIME
                 FROM INDEX_DETAIL WITH (NOLOCK)
                 WHERE SERVER_NAME = cteServer.SERVER_NAME
                   AND DATABASE_NAME = cteServer.DATABASE_NAME
                 ORDER BY STATS_TIME DESC) AS CTE   -- GET MAX STATS_TIME
    OUTER APPLY (SELECT ID.*
                 FROM INDEX_DETAIL ID WITH (NOLOCK)
                 WHERE ID.STATS_TIME = CTE.STATS_TIME
                   AND ID.DATABASE_NAME = cteServer.DATABASE_NAME
                   AND ID.SERVER_NAME = cteServer.SERVER_NAME) AS D  -- AND GET ALL THE INDEX_DETAIL ROWS
    JOIN STATS_COLLECTION_SUMMARY S
        ON S.SERVER_NAME = D.SERVER_NAME
       AND S.STATS_TIME = D.STATS_TIME
       AND S.DATABASE_NAME = D.DATABASE_NAME
    LEFT JOIN INDEX_USAGE_STATS U WITH (NOLOCK)
        ON U.SERVER_NAME = D.SERVER_NAME
       AND U.STATS_TIME = D.STATS_TIME
       AND U.DATABASE_NAME = D.DATABASE_NAME
       AND U.OBJECT_ID = D.OBJECT_ID
       AND U.INDEX_ID = D.INDEX_ID
    LEFT JOIN INDEX_PHYSICAL_STATS P WITH (NOLOCK)
        ON D.SERVER_NAME = P.SERVER_NAME
       AND D.STATS_TIME = P.STATS_TIME
       AND D.DATABASE_NAME = P.DATABASE_NAME
       AND D.OBJECT_ID = P.OBJECT_ID
       AND D.INDEX_ID = P.INDEX_ID
    LEFT JOIN INDEX_OPERATIONAL_STATS O WITH (NOLOCK)
        ON D.SERVER_NAME = O.SERVER_NAME
       AND D.STATS_TIME = O.STATS_TIME
       AND D.DATABASE_NAME = O.DATABASE_NAME
       AND D.OBJECT_ID = O.OBJECT_ID
       AND D.INDEX_ID = O.INDEX_ID

    GO

    1. Rod Hansen says:

      Thanks for the update !! This is the type of feedback we need to keep us in great shape for the release.

  5. Robert B says:

    This is a great tool and is very useful. However, there is a bug in the Sample_sp_index_maintenance Standard and Enterprise scripts, in the first SELECT statement: both join on i.index_id = f.index_depth when it should be i.index_id = f.index_id.
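
    The corrected join would look roughly like this. The exact statement depends on the sample script, but assuming i aliases sys.indexes and f aliases sys.dm_db_index_physical_stats (the usual pattern for index maintenance scripts), the fix is:

        -- Sketch of the corrected join (fragment; surrounding statement comes from the sample script):
        FROM sys.indexes AS i
        JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS f
            ON i.object_id = f.object_id
           AND i.index_id  = f.index_id    -- was incorrectly: i.index_id = f.index_depth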

    1. Rod Hansen says:

      Thanks, good catch. I'll have to let Michael Devoe on my team know, as those scripts are from him. You can read his blog post on index maintenance.

  6. Brad Fiscus says:

    We are having problems with DYNPERF_PROCESS_AXSQLTRACE not finishing. It was finishing in 5 minutes or less for weeks, but now it has run for 2 days without finishing. Do you have any suggestions?

    1. Rod Hansen says:

      This should be fixed in the next release that I'm working on. Thanks for the feedback.

  7. Jon says:

    Thanks, an interesting read.
    Are there any example reports you could provide that are generated from DynamicsPerf? We are looking to use this for a client soon.

Comments are closed.
