Finding Large Transactions that Bloat Your Transaction Log


I know this is a fairly common problem and there might be multiple solutions out there, but I figured adding another one may not hurt.

An application I was supporting recently was causing spurts of transaction log growth in the hundreds of gigabytes, and we needed to discover what was behind it. I built the following process to help with the discovery:

1. Create a script titled FindLargeTransactions.sql and paste the following contents into it:

set nocount on
declare @datetime datetime
select @datetime = GETDATE()
select @datetime logtime, st.text, tr.database_id, tr.transaction_id, database_transaction_log_bytes_used, database_transaction_log_bytes_reserved,
database_transaction_log_record_count, database_transaction_state, database_transaction_status,
database_transaction_log_bytes_used_system, database_transaction_log_bytes_reserved_system
from sys.dm_tran_database_transactions tr
inner join sys.dm_exec_requests r
on tr.transaction_id = r.transaction_id
cross apply sys.dm_exec_sql_text(r.sql_handle) st
where database_transaction_log_bytes_used > 100*1024*1024  -- 100 MB
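
If you run the script ad hoc in SSMS rather than from a job, translating database_id to a name on the fly makes the output easier to read. A minimal variation of the same query, wrapped with DB_NAME():

```sql
-- Same DMV join as above, with database_id translated to a readable name
select DB_NAME(tr.database_id) as database_name,
       database_transaction_log_bytes_used/1024/1024 as log_mb_used,
       st.text
from sys.dm_tran_database_transactions tr
inner join sys.dm_exec_requests r
    on tr.transaction_id = r.transaction_id
cross apply sys.dm_exec_sql_text(r.sql_handle) st
where database_transaction_log_bytes_used > 100*1024*1024  -- 100 MB
```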

2. Schedule a job that runs the following SQLCMD.EXE command, every minute for example. Make sure you choose an output file for the job step; that is where the output of the script will be saved.

sqlcmd -SmyServer\sql2008r2 -E -ic:\temp\FindLargeTransactions.sql -W -w65535 -s"|" -h-1 -HFindLargeTransactions

3. Then, create a table into which you can BULK INSERT the data for further analysis.

drop table [dbo].[LargeTransactions]
CREATE TABLE [dbo].[LargeTransactions](
      [logtime] datetime not null,
      [text] [nvarchar](max) NULL,
      [database_id] [int] NOT NULL,
      [transaction_id] [bigint] NOT NULL,
      [database_transaction_log_bytes_used] [bigint] NOT NULL,
      [database_transaction_log_bytes_reserved] [bigint] NOT NULL,
      [database_transaction_log_record_count] [bigint] NOT NULL,
      [database_transaction_state] [int] NOT NULL,
      [database_transaction_status] [int] NOT NULL,
      [database_transaction_log_bytes_used_system] [int] NOT NULL,
      [database_transaction_log_bytes_reserved_system] [int] NOT NULL
)

4. Identify the job output file and substitute it in the following script as the BULK INSERT source, then execute the BULK INSERT command:

BULK INSERT dbo.LargeTransactions FROM 'c:\temp\FindLargeTransactions_10_29_2012.out' WITH (FIELDTERMINATOR = '|')

5. Query the table for “large” transactions, i.e. ones that impact the transaction log severely.

select database_transaction_log_bytes_used/1024/1024 LogMBUsed, Text
from LargeTransactions
order by 1 desc
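
A variation I find handy at this step, assuming the table loaded cleanly, is to aggregate by statement text so that repeat offenders stand out rather than individual samples:

```sql
-- Peak log usage and sample count per distinct statement
select max(database_transaction_log_bytes_used)/1024/1024 as PeakLogMBUsed,
       count(*) as Samples,
       [text]
from dbo.LargeTransactions
group by [text]
order by 1 desc
```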




Comments (4)

  1. Hiren says:

    What was the issue you found using this method?

  2. _Jo.Pi_ says:


    I found a DELETE statement that was deleting tens of millions of rows and causing over 250 GB of log growth in a single transaction.

  3. Anandan Kanagarajan says:


    Could you please explain the reason to use the option -HFindLargeTransactions

    sqlcmd -SmyServer\sql2008r2 -E -ic:\temp\FindLargeTransactions.sql -W -w65535 -s"|" -h-1 -HFindLargeTransactions

  4. _Jo.Pi_ says:


    The -H option allows you to report a hostname of your choice, instead of the machine name of the computer SQLCMD connects from. The reason for this "trick" is to let you identify that this is a script you created. If by chance you are doing performance troubleshooting and you notice a script has been running for a long time and wonder what it is, you can look at the hostname column in the sysprocesses view or host_name in the sys.dm_exec_sessions DMV, know that this is your job, and ignore it for the rest of your troubleshooting.
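
    To illustrate, a quick way to spot the monitoring script's own sessions (the hostname here is whatever value you passed to -H):

    ```sql
    -- Sessions reporting the custom hostname set via sqlcmd -H
    select session_id, host_name, program_name, login_time
    from sys.dm_exec_sessions
    where host_name = 'FindLargeTransactions'
    ```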