Conor vs. Recompiles, part 2

Thanks to those of you who replied to my previous post. I spent the last week or so talking with customers and coworkers and looking at various applications to see how recompiles are handled in applications today. At a high level, SQL Server doesn’t really “require” you to recompile at all. The people who did think about this were mostly those having trouble with plan selection. For example, if statistics were not updated frequently enough and users were querying new data on a time-series column, the optimizer could pick poor plans until statistics were updated on the queried column. So, this is usually a more advanced topic for DBAs to consider.
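
If you suspect this is happening to you, one quick check is to see when the statistics on the queried table were last updated. Here is a minimal sketch (dbo.Orders is a hypothetical table name – substitute your own object):

-- When were the statistics on this table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');
-- If last_updated is well before the newest data arrived, a manual
-- UPDATE STATISTICS dbo.Orders; will often fix plan selection.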

As you might expect, customers who manage recompiles explicitly usually run a somewhat larger database or set of databases. Sometimes it is an application with very precise performance requirements where the queries are relatively small (for example, a query that should usually be a “seek” against a table instead of a scan). When the plan selected is a scan instead, the query performs much more slowly than it would with the seek plan.
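
To make that concrete, here is a sketch of the kind of query I mean – the table, index, and values are hypothetical. You can turn on the actual showplan to verify whether you are getting the seek or the scan:

SET STATISTICS XML ON;   -- return the actual plan alongside the results

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;   -- selective predicate: you would expect an index seek here

SET STATISTICS XML OFF;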

In terms of which mechanism people use, this seemed to be all over the place – people use sp_recompile, DBCC FREEPROCCACHE, and UPDATE STATISTICS for different kinds of problems. I think UPDATE STATISTICS has an edge, since it usually addresses the problems that people are trying to fix with a recompile, but others would reach for DBCC FREEPROCCACHE or sp_recompile as well.
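
For reference, the three mechanisms look like this (dbo.Orders is again a hypothetical object name):

EXEC sp_recompile N'dbo.Orders';   -- invalidate cached plans that reference the object
DBCC FREEPROCCACHE;                -- with no arguments, drop all plans from the procedure cache
UPDATE STATISTICS dbo.Orders;      -- refresh statistics; dependent plans recompile on next use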

I will convey a small fact about sp_recompile that you may not know. Internally, SQL Server keeps a timestamp that is used to determine whether plans in the procedure cache are still valid – as long as the timestamp on the referenced object is the same as the current timestamp in the metadata for that object, the plan is “valid”. sp_recompile works by updating that metadata timestamp for the object passed as its argument. So, you are effectively making a metadata change when you run sp_recompile. DBCC FREEPROCCACHE, in contrast, drops plans from the cache without any metadata change. UPDATE STATISTICS works on a different, secondary metadata version with slightly different semantics.
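
One way to watch the effect of these operations from the outside is with the plan cache DMVs. Here is a sketch that lists cached plans whose text mentions a (hypothetical) object, so you can see them disappear after a recompile is triggered:

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%Orders%';   -- hypothetical filter on the object name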

So, if I had to make a recommendation, I’d say to consider whether you really need sp_recompile or not – it is somewhat more heavyweight than the other approaches. None of them are invalid, of course, but you might find that sp_recompile is a bigger hammer than you need if you just want to fix one query whose plan is not what you want. Note that DBCC FREEPROCCACHE takes arguments, so you can limit what is evicted and keep the recompile load to just the subset you care about.
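
For example, here is a sketch of targeted eviction (the argument form of DBCC FREEPROCCACHE is available in SQL Server 2008 and later; the LIKE filter is a hypothetical placeholder for your problem query’s text):

DECLARE @ph varbinary(64);

-- Find the plan handle for the one query whose plan you want to throw away
SELECT TOP (1) @ph = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%my problem query%';

-- Evict just that one plan instead of the whole cache
IF @ph IS NOT NULL
    DBCC FREEPROCCACHE (@ph);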

Happy Querying (and Memorial Day, for those of you in the USA)!

Conor