Although Database Logging can most certainly be valuable from a business perspective, it can be rather expensive with regard to resource use and management.
When logging is enabled for a particular transaction type, each instance of that transaction type causes multiple records to be written to the SQL Server transaction log file: one record for the original transaction, and another for the corresponding entry written to the database log table.
Set-based database operations are downgraded to row-based operations for every table where database logging is enabled. All that time spent optimizing your code for database performance can therefore be wasted if logging is enabled for the table: instead of a single round-trip bulk insert, update, or delete, logging forces those operations to run row by row.
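As a minimal X++ sketch of what gets downgraded (the table, field, and selection criteria here are placeholders, not from the original text):

```xpp
static void SetBasedUpdateExample(Args _args)
{
    CustTable custTable;

    ttsBegin;
    // Without database logging this executes as a single set-based
    // SQL UPDATE. With logging enabled on CustTable, the kernel
    // downgrades it to one UPDATE per matching row, so that each
    // individual change can be written to the database log table.
    update_recordSet custTable
        setting PaymTermId = 'Net30'
        where custTable.CustGroup == 'Retail';
    ttsEnd;
}
```

The same downgrade applies to insert_recordset and delete_from statements against logged tables.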
Database logging can adversely affect long-running automated processes such as inventory close, BOM calculations, master planning, and long-running data imports. Logging ledger or inventory transactions, or any other "system generated" transactions, is therefore not recommended, as it adds little or no value.
The database log table can grow quickly and increase the size of the database. The growth depends to a degree on how much logged data you decide to retain; I know of examples where the log table made up around one third of the total database size. There can of course be a business justification for this, but unless the log data is specifically needed, consider truncating this table when copying the database to your development and test systems.
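As a sketch of that clean-up, the truncation is a one-liner on the SQL Server side; run it only against the restored development or test copy, never against production:

```sql
-- Run against the restored development/test copy only, never production.
-- Removes all database log entries in a single, minimally logged operation.
TRUNCATE TABLE SYSDATABASELOG;
```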
When planning database logging, best practice would be to:
Create a plan for how long you will retain logged data, and how you will archive or delete it. Is it a necessity to keep database logs for years and years, or is it just a "nice to have"?
This does to a degree depend on who will review the database log entries and for what reason. You might for example want to keep all changes done to some module parameters, whereas log entries for inventory items that are no longer in use are probably of little interest.
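If you settle on a fixed retention period, the policy can be enforced with a scheduled date-based delete against the log table. The one-year cutoff below is just an example, and Dynamics AX also provides a clean-up function for the database log from within the client:

```sql
-- Example retention policy: keep one year of database log entries.
-- Adjust the cutoff to match your own retention plan.
DELETE FROM SYSDATABASELOG
WHERE CREATEDDATETIME < DATEADD(YEAR, -1, GETUTCDATE());
```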
By selecting specific fields to log instead of whole tables, you can limit log entries and improve performance at the same time. There is no reason to log all changes to a record if only a handful of fields in the table are of importance. This is especially true if those fields are updated infrequently while other fields in the table change often. Such "high value" fields could, for example, be payment terms or credit limits for customers. Another example is logging the entire InventTable: the BomLevel field is updated every time you run BomCalc, causing a new log entry for every InventTable record included in the calculation.
NOTE: Only updates can be logged for individual fields.
You should also consider increasing how often you perform SQL Server transaction log backups after you have configured database logging.
But be careful: as noted earlier, database logging can cause performance problems, so plan carefully and restrict any unnecessary logging.
Database logging is a means of tracking specific types of changes to Microsoft Dynamics AX tables and fields. Changes that can be tracked include inserts, updates, deletes, and renames of the primary key. When you configure logging for a table or field, a record of every change to that table or field is stored in the database log table, SYSDATABASELOG, in the Microsoft Dynamics AX database.
The business uses of database logging include the following:
• Creating an auditable record of changes to specific tables that contain sensitive information.
• Monitoring the use of electronic signatures. By default, all transactions that have been signed with electronic signatures are logged.
NOTE: Database logging is intended to track individual transactions. It is not intended for tracking automated transactions running in batch jobs. Database logging is meant to answer who, what, and when; in other words, which user changed what data, and at what time. It is quite pointless to track changes made by system activity, such as posting processes.
- Avoid transactional tables such as InventTrans or LedgerTrans, where you have a high number of updates and inserts.
- Avoid system tables such as SysLastValue, or even the database log table itself.
- For inserts, evaluate whether the table properties CreatedBy and CreatedDateTime can be used instead.
- For updates, evaluate whether you can track individual fields rather than the full row.
Take the code example attached and set up database logging for the table "TestSysdatabaselog".
When running the jobs, you can see a clear difference in performance with and without database logging enabled.
Run the job "TestdatabaselogInsertWith"
- Inserts 100,000 records with database logging
Run the job "TestDatabaselogInsertWithout"
- Inserts 100,000 records without database logging
Run the job "TestDatabaseLogUpdateWith"
- Updates 100,000 records with database logging
Run the job "TestdatabaselogUpdateWithout"
- Updates 100,000 records without database logging
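The attached jobs themselves are not reproduced here, but an insert test like TestdatabaselogInsertWith could be as simple as the following X++ sketch (the field name and the timing approach are assumptions, not taken from the attachment):

```xpp
static void TestdatabaselogInsertWith(Args _args)
{
    TestSysdatabaselog testRecord;   // the test table from the example
    int                i;
    int                startTime;

    startTime = WinAPI::getTickCount();

    ttsBegin;
    for (i = 1; i <= 100000; i++)
    {
        testRecord.clear();
        // Field name is a placeholder; use whatever the test table defines.
        testRecord.Description = strFmt('Record %1', i);
        testRecord.insert();    // with logging enabled, each insert also
                                // writes a record to SYSDATABASELOG
    }
    ttsEnd;

    info(strFmt('Inserted 100000 records in %1 ms',
                WinAPI::getTickCount() - startTime));
}
```

Running the same job against the table with logging disabled gives the baseline for comparison.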