Reduce locking when updating data – Better Performance


The following pattern typically stems from an old practice used in SQL 4.x and 6.x days, before IDENTITY was introduced.

begin tran
declare @iVal int

select @iVal = iVal from CounterTable (HOLDLOCK) where CounterName = 'CaseNumber'

update CounterTable
set iVal = @iVal + 1
where CounterName = 'CaseNumber'

commit tran

return @iVal

This can be a dangerous construct.  Assume the query is cancelled (attention) right after the SELECT.  SQL Server treats this as a batch termination and does not execute the remaining statements.  The application now holds a lock under the open transaction, and without proper handling this leads to blocking.

One Statement Fix

declare @iVal int

update CounterTable
set @iVal = iVal = iVal + 1
where CounterName = 'CaseNumber'

return @iVal

SQL Server allows you to update a value and return the updated value into a variable in the same statement.  A single UPDATE can do all the work.  If the query is cancelled by the application, the UPDATE is atomic and will either complete or roll back, and the application has much better control over the lock and transaction scope.

Use this to help reduce blocking and possible deadlock issues.  Because the table is accessed only once, it can also improve the performance of your application.
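On SQL Server 2005 and later, the same single-statement idea can also be expressed with the OUTPUT clause, which returns the incremented value as a result set instead of a variable assignment.  A minimal sketch, assuming the same CounterTable layout as above:

```sql
-- Increment the counter and return the new value in one atomic statement.
-- Assumes CounterTable(CounterName, iVal) as in the example above.
update CounterTable
set iVal = iVal + 1
output inserted.iVal            -- the value after the increment
where CounterName = 'CaseNumber'
```

Because the read and the write happen in one statement, there is no window for an attention to leave a lock held between them.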


[OCT 28, 2009] - Update

One of the great things about this blog is that I get to interact with other bright individuals.  After the original post I received feedback from the SQL MVP community, and they pointed out a few additional considerations.

The goals of my original post were:

1.   Point out the issue with poor application design leaving a transaction open and holding locks

2.   Point out that the locking behavior can result in unexpected deadlocks

3.   Point out a simple way to avoid the issue.

The scenario I presented is limited to cases where you have an appropriate primary key that prevents updates from occurring in a way you might not expect.  The following blog post points out some of the pitfalls that constructs like HOLDLOCK and concurrent INSERT statements can encounter as well.

http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/30/855.aspx

This led to the following discussion (trimmed for the blog post).

----- SQL MVP Message-----

The sp_getapplock is a no-brainer for me - it's independent of the application tables, so it does not lock out read-only queries, which is what happens if you use UPDLOCK.  HOLDLOCK is horrible and causes deadlocks, and other methods require SERIALIZABLE, which again leads to deadlocks. 
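[RDORR note]  The sp_getapplock approach mentioned above can be sketched roughly as follows; the resource name and timeout are illustrative choices, not from the original discussion:

```sql
declare @iVal int, @rc int

begin tran

-- Serialize on an application-defined resource name instead of table locks.
exec @rc = sp_getapplock
    @Resource    = 'CaseNumberCounter',  -- hypothetical resource name
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',        -- released at commit/rollback
    @LockTimeout = 5000                  -- milliseconds

if @rc >= 0   -- 0 = granted, 1 = granted after waiting
begin
    update CounterTable
    set @iVal = iVal = iVal + 1
    where CounterName = 'CaseNumber'

    commit tran
end
else
    rollback tran
```

Because the lock owner is the transaction, an attention that leaves the transaction open still holds the app lock, so the cancel-handling caution in this post applies here as well.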

Orphaned transactions should never happen because it should be done in a succinct stored procedure, leaving the xact open would be a programming error.

[RDORR]  This is not always true.  Even if the logic is wrapped in a stored procedure, that does not prevent the query timeout case.  The example below shows how a transaction can still be open after the procedure is cancelled from the client, and you do not get a warning about the transaction count being different on entry and exit of the procedure.


use tempdb
go

select @@TRANCOUNT
go

create procedure spTest
as
begin
    begin tran
    waitfor delay '01:00:00'
end
go

exec spTest -- Cancel execution of this with a query timeout (attention)
go

select @@TRANCOUNT -- still == 1

NOTE:  You still need logic in the application (as all applications should have) to handle activity after a cancel request and reset the proper state.

Ok - the begin try; yep - that would work; however, you will require more complex logic to do the retry.  If you have gone the UPDLOCK, HOLDLOCK route and have used BEGIN TRY, then the transaction you are in can only be rolled back, so you are a bit stuck there too.

[RDORR]  I am saying you do the INSERT and not the SELECT.  If the INSERT succeeds (you have a proper PK, of course) then there was no row.  If it fails, then there was a row.  This only requires a single lookup in the table, and you can handle the @@ERROR condition.

I had a recent client who had the orphaned transaction problem that caused massive blocking - they had no trapping in place.  That one was SQL 2000, so I got them to write something around sysprocesses.  In 2005 and 2008 you can use the BLOCKED_PROCESS_REPORT - you could write the activation stored procedure on the event notifications queue to pick that specific event type up and, if it is caused by a specific statement/process, actually issue a KILL - does that make sense?  It's application dependent, though, and I tend to go with the BLOCKED_PROCESS_REPORT (raised because the blocked process threshold in sp_configure is set to 5 seconds): just email me and let me deal with it...

-----RDORR Message-----

Devil's Advocate:  Why would you not have a PK on the table and just try the INSERT?  Use TRY/CATCH to eat the PK violation so you don't need all the locking.  I assume this is a simplified sample, you want to gather other data, and it is not really this simple?
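[RDORR note]  The insert-first idea could look something like the following sketch (SQL 2005+ TRY/CATCH; error number 2627 is the primary key violation).  The table layout is the same hypothetical CounterTable used earlier in the post:

```sql
begin try
    -- Assumes a primary key on CounterName; the INSERT acts as the existence check.
    insert CounterTable (CounterName, iVal)
    values ('CaseNumber', 1)
end try
begin catch
    if ERROR_NUMBER() = 2627   -- PK violation: the row already exists
    begin
        update CounterTable
        set iVal = iVal + 1
        where CounterName = 'CaseNumber'
    end
    -- other errors would be re-raised or handled by the caller
end catch
```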

What would you suggest to the user about the attention problem?  We see customers get into trouble when, after they acquire the lock (say, the app lock in your scenario), an attention arrives and they do not issue a rollback.  So they now lock up everything.

-----SQL MVP Message-----

It's not really a good method for doing this because you can still get repeated values (see my blog: http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/30/855.aspx, which uses sp_getapplock as an alternate approach).

The problem is that unless you literally have a straight UPDATE and WHERE without any other table references, the SELECT side of the UPDATE is done before the locks for the update are applied, so the locking goes a bit mad.

I think the answer is that you need to state you must have a unique constraint so that you are only ever updating one row and can guarantee that.  You also need to do "if @@rowcount > 1 then raiserror", and you should not use subqueries or other table references.
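[RDORR note]  The safeguard described above (a unique constraint plus a row count check) might look like this sketch; the error message text is illustrative:

```sql
declare @iVal int

update CounterTable
set @iVal = iVal = iVal + 1
where CounterName = 'CaseNumber'

if @@ROWCOUNT > 1
    raiserror ('More than one counter row updated for CaseNumber', 16, 1)
```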

 Bob Dorr - Principal SQL Server Escalation Engineer
