Late night blog post to round out spring-break vacation…
A number of customers I’ve spoken to in the last few weeks have been using database or file shrink in situations where they don’t really need to. There are a few recurring scenarios I’ve seen:
- Deleting a lot of data and then taking a backup – how to make the backup smaller without running shrink?
- Emptying a file before removing it
- Moving a file/filegroup/database to read-only
- Running ALTER INDEX … REBUILD and then running shrink to reclaim the space used for the rebuild
The last scenario is particularly nasty because shrink effectively reverses the effect of an index rebuild or reorganize (see this post for more on the shrink algorithm).
So, with these scenarios demanding some way to reduce space usage, how can you do it without using shrink? The answer depends on whether your base tables have clustered indexes or are heaps.
For tables with clustered indexes, the trick is to rebuild the clustered indexes into new filegroups, using the CREATE INDEX … WITH DROP_EXISTING statement. This does the same work as a regular index rebuild, but it lets you move the index to a new filegroup (or onto a partition scheme that defines a filegroup layout), and you can do it online (in Enterprise Edition).
If you’re moving the filegroup or database to be read-only, then you should set the FILLFACTOR to be 100% when you rebuild the indexes.
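Putting those two points together, a minimal sketch of such a rebuild might look like the following. The table, index, key column, and filegroup names here (dbo.MyTable, CI_MyTable, MyKeyColumn, NewFG) are hypothetical, and the index definition must exactly match the existing clustered index (same keys, same uniqueness) for DROP_EXISTING to work:

```sql
-- Rebuild the existing clustered index onto the new filegroup NewFG,
-- online (Enterprise Edition), packing pages full ready for read-only use.
CREATE UNIQUE CLUSTERED INDEX CI_MyTable
ON dbo.MyTable (MyKeyColumn)
WITH (DROP_EXISTING = ON, ONLINE = ON, FILLFACTOR = 100)
ON [NewFG];
```

Drop the FILLFACTOR option (or use your normal value) if the data will continue to be modified.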
If you’re not changing the clustered index schema, the non-clustered indexes won’t be automatically rebuilt, so they’ll stay behind in the old filegroup. You’ll want to manually rebuild them into the new filegroup(s) in the same way, or just drop and recreate them (which could be tricky if they’re enforcing constraints).
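Moving a non-clustered index works the same way as moving the clustered index. Again the names here (NCI_MyTable_SomeColumn, SomeColumn) are hypothetical, and the definition must match the existing index:

```sql
-- Rebuild an existing non-clustered index onto the new filegroup.
CREATE NONCLUSTERED INDEX NCI_MyTable_SomeColumn
ON dbo.MyTable (SomeColumn)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [NewFG];
```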
Once you’ve moved everything out of the old filegroup, you can just remove it, and you’re done without having to execute shrink at all.
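Assuming hypothetical names (database MyDB, logical file name OldFGFile, filegroup OldFG), removing the emptied filegroup is just:

```sql
-- The file must contain no data for REMOVE FILE to succeed;
-- once the filegroup has no files, it can be removed too.
ALTER DATABASE MyDB REMOVE FILE OldFGFile;
ALTER DATABASE MyDB REMOVE FILEGROUP OldFG;
```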
If your tables are heaps, the first thing to do is preferably drop all the non-clustered indexes, or at least those that aren’t enforcing constraints that would be in danger of being violated while an enforcing non-clustered index was dropped. This is because when shrink moves a heap page, all related non-clustered index rows have to be updated, because non-clustered index rows on a heap contain physical row identifiers (see my previous post on shrink for more details).
Once you’re rid of the non-clustered indexes, you’re free to run shrink. Heaps have no logical ordering, so they cannot become logically fragmented – the problem of shrink causing fragmentation doesn’t affect heaps. Then you have two choices:
- Run shrink with EMPTYFILE into a new filegroup and then recreate the non-clustered indexes; or
- Shrink the existing filegroup with NOTRUNCATE (to create a bunch of contiguous free space in the database files), recreate the non-clustered indexes in that contiguous space, and then, if you’re making the filegroup or database read-only, run shrink with TRUNCATEONLY (to give the extra space back to the operating system).
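The shrink options in the two choices above map onto DBCC SHRINKFILE like this (the logical file name MyDataFile is hypothetical):

```sql
-- Choice 1: move all data out of the file, ready for it to be removed.
DBCC SHRINKFILE (N'MyDataFile', EMPTYFILE);

-- Choice 2: compact data to the front of the file without releasing
-- space to the OS, leaving contiguous free space for the index rebuilds...
DBCC SHRINKFILE (N'MyDataFile', NOTRUNCATE);
-- ...then, after recreating the non-clustered indexes, release the
-- unused space at the end of the file back to the OS.
DBCC SHRINKFILE (N'MyDataFile', TRUNCATEONLY);
```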
With a little thought, shrink can be avoided. I know of several customers who have tried this approach successfully, and saved themselves lots of resource usage in the process.
Let me know how you get on.