Split a file group into multiple data files

Periodically we are asked how to split an existing filegroup into multiple data files. The quick answer is that SQL Server does not have a built-in way to do this automatically, but you can do it yourself. The process is relatively simple, and I have provided a script that demonstrates one technique. The script is not designed for production use and is provided for illustrative purposes only.

There have been many articles in the past that discuss using the ALTER INDEX … REBUILD option to move objects from one filegroup to another and to “rebalance” that way. This author acknowledges the benefits of that technique, but sometimes the question of “rebalancing” is driven more by simple “geometry” constraints. For example, suppose I have a database on a volume that I cannot grow, and I simply want to add new files to the filegroup, but have those files reside on a different volume. Adding the new files is quite simple, but by default the existing file remains essentially full and there is an imbalance between the old and new files.

The technique proposed here will effectively rebalance and move the data out of the existing file across to the new files in such a way that the original file can be reduced in size, freeing up space on a volume that is filling up. It should be noted that while this technique “moves” data from the original file to the new files in the same filegroup, it does not guarantee that all objects residing in the filegroup are “balanced”. Some objects, depending upon their location in the original data file, may have some, all, or none of their data moved. Ultimately the total number of allocated pages will be balanced among the various files in the filegroup, but there could still be hotspots for certain objects. This article and its associated script do not attempt to deal with that issue.

High level process for splitting a filegroup into multiple files

  • Add several new, empty data files to the filegroup
  • Cap the new files so they cannot autogrow
  • Empty the original data file
  • Readjust all files so they each have the same amount of free space, and re-enable autogrowth
  • Shrink the original data file to the same size as the new files

This process works well and can be done “online” – that is, the objects in the filegroup remain accessible during the splitting process. You should take into consideration that this process can generate a lot of I/O. In addition to potential performance impacts, databases that participate in an AlwaysOn Availability Group, database mirroring, or even log shipping can also be affected due to the number of log records that are generated, all of which need to be shipped to the respective secondary (or secondaries).
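Before starting, it helps to capture the current size and used space of each file in the target filegroup, both as a baseline and to verify the outcome afterward. A minimal sketch of such a check (the filegroup name 'PRIMARY' below is only an example; substitute your own):

	-- Inspect size and used space for every file in a given filegroup.
	-- size is reported in 8 KB pages, so divide by 128 to get MB.
	SELECT  f.name                                   AS logical_name,
	        f.physical_name,
	        f.size / 128                             AS size_mb,
	        FILEPROPERTY(f.name, 'SpaceUsed') / 128  AS used_mb
	FROM    sys.database_files AS f
	JOIN    sys.filegroups     AS fg ON fg.data_space_id = f.data_space_id
	WHERE   fg.name = 'PRIMARY';   -- example filegroup name

Running this same query after each step below makes the movement of allocated pages between files visible.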

This diagram depicts the intended outcome – to take a filegroup with a single data file in it, and split it into multiple data files.

Step 1: Add new data files to the filegroup

The first step in splitting a filegroup into multiple data files is to add one or more new empty data files to the filegroup. In this example, the desired goal is for the original file in the filegroup to be 1/4th its original size and have a total of 4 files of equal size in the filegroup.

In order to do this, we need to add 3 new data files to the filegroup that are each 1/4th the size of the original data file.
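The size for each new file can be derived from the original file's current size. A sketch of that calculation, assuming @LogicalName already holds the original file's logical name (the variable names mirror those used in the script excerpts below):

	-- Each new file should be 1/4 of the original file's current size.
	-- size is reported in 8 KB pages, so divide by 128 to get MB.
	DECLARE @NewFSizeMB int;
	SELECT  @NewFSizeMB = (size / 128) / 4
	FROM    sys.database_files
	WHERE   name = @LogicalName;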

--add (@numfiles-1) files to the filegroup
SELECT @loopcntr = 2;
WHILE @loopcntr <= @numfiles BEGIN
	SELECT @NewLogicalName = @LogicalName + '_' + CAST(@loopcntr as varchar(5))
	SELECT @NewPhysicalName = REPLACE(@PhysicalName, '.mdf', '_' + CAST(@loopcntr as varchar(5)) + '.ndf')
	SELECT @sql = 'ALTER DATABASE [' + DB_NAME() + '] ADD FILE (' + @crlf +
			'NAME = ' + @NewLogicalName + ',' + @crlf + 
			'FILENAME = ' + QUOTENAME(@NewPhysicalName, '''') + ',' + @crlf + 
			'SIZE = ' + CAST(@NewFSizeMB as VARCHAR(max)) + 'MB,' + @crlf +
			'MAXSIZE = ' + CAST(@NewFSizeMB as VARCHAR(max)) + 'MB,' + @crlf + 
			'FILEGROWTH = 0MB) TO FILEGROUP ' + QUOTENAME(@FileGroupName) + ';' + @crlf + @crlf
	PRINT @sql
	EXEC (@sql)
	SELECT @loopcntr += 1
END

Step 2: Disable autogrowth on the new data files

The reason for this will become clear in the next step. In the sample script provided with this article, step 2 is actually combined with step 1 by setting the FILEGROWTH parameter to “0MB” in the ALTER DATABASE … ADD FILE command (see the code segment above).
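If the new files had already been added with growth enabled, they could be capped after the fact instead. A hedged sketch using placeholder names ('MyDatabase' and 'DataFile_2' are not from the script; substitute your own database and logical file names):

	-- Cap an already-added file so it cannot autogrow during the rebalance.
	ALTER DATABASE [MyDatabase]
	MODIFY FILE (NAME = DataFile_2, FILEGROWTH = 0MB);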

Step 3: “Empty” the original data file

After the new files have been “capped” we are ready to “rebalance”. This is done by executing a DBCC SHRINKFILE command on the original data file with the EMPTYFILE option. This takes the data from the “end of the data file” and moves it into the 3 newly added data files. Since each of those files has the same amount of free space, the proportional fill algorithm will distribute the data from the original file evenly across the three new files.
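Stripped of the dynamic SQL and error handling shown below, the core command is simply a DBCC SHRINKFILE with the EMPTYFILE option (the logical file name 'DataFile_1' is a placeholder):

	-- Move all allocated data out of the named file and into the
	-- remaining files of the same filegroup.
	DBCC SHRINKFILE ('DataFile_1', EMPTYFILE);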

The filegroup will go from this

To this:

--empty the original file, which moves its data into the new files
--error 2556 (insufficient space in the filegroup) is expected once the capped new files are full
SELECT @sql = 'BEGIN TRY' + @crlf +
		'DBCC SHRINKFILE (' + @LogicalName + ', EMPTYFILE)' + @crlf +
		'END TRY' + @crlf + 
		'BEGIN CATCH' + @crlf + 
		'   IF ERROR_NUMBER() <> 2556 BEGIN' + @crlf + 
		'      SELECT ERROR_NUMBER(), ERROR_MESSAGE()' + @crlf + 
		'      RAISERROR (''Severe error moving data into new files.  MANUAL cleanup necessary.  Terminating connection...'', 19, 1) WITH LOG' + @crlf +
		'	END' + @crlf + 
		'END CATCH' + @crlf + @crlf
PRINT @sql
EXEC (@sql)

The reason we disabled autogrowth on the three new files is to prevent the original file from getting “too empty”. In this example, we want 4 files of equal size when we’re done. If we had not prevented the 3 new files from autogrowing, they would have kept growing until the original file was either empty or until all objects capable of moving had been moved. This would not have left us in a balanced state, but in a state that would have looked something more like this.

Step 4: Re-enable autogrowth and set the size to match for all datafiles

At this point we want to make sure that all the files are set to have the same maximum file size and autogrowth parameters. This is done so that if the files become full and need to autogrow, they will grow by the same amount, thus leaving the same amount of free space in all the files.

--set all files to have a MAXSIZE and enable autogrowth
SELECT @loopcntr = 1;
WHILE @loopcntr <= @numfiles BEGIN
	SELECT @NewLogicalName = CASE @loopcntr WHEN 1 THEN @LogicalName ELSE @LogicalName + '_' + CAST(@loopcntr as varchar(5)) END
	SELECT @sql = 'ALTER DATABASE [' + DB_NAME() + '] MODIFY FILE (' + @crlf +
			'NAME = ' + @NewLogicalName + ',' + @crlf + 
			'MAXSIZE = ' + @maxsizeMBText + ',' + @crlf + 
			'FILEGROWTH = ' + @maxgrowthMBText + ');' + @crlf + @crlf
	PRINT @sql
	EXEC (@sql)
	SELECT @loopcntr += 1
END

Step 5: “Shrink” the original data file to match the file size of the other 3 new files

At this point we can issue another DBCC SHRINKFILE on the first file to shrink the file to be the same size as the other 3 files. The diagram below shows the final state at this point, 4 files of equal size in the filegroup.
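In plain form, this shrink is the same DBCC SHRINKFILE command, but with a target size in MB instead of the EMPTYFILE option (the file name and size below are placeholders):

	-- Shrink the original file down to a 1024 MB target size (placeholder values).
	DBCC SHRINKFILE ('DataFile_1', 1024);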

--shrink the original file to match the new files' size
SELECT @sql = 'BEGIN TRY' + @crlf +
		'DBCC SHRINKFILE (' + @LogicalName + ', ' + CAST(@NewFSizeMB as varchar(max)) + ')' + @crlf +
		'END TRY' + @crlf +
		'BEGIN CATCH' + @crlf +
		'   IF ERROR_NUMBER() <> 2556 BEGIN' + @crlf +
		'      RAISERROR (''Severe error moving data into new files.  MANUAL cleanup necessary.  Terminating connection...'', 19, 1) WITH LOG' + @crlf +
		'	END' + @crlf +
		'END CATCH' + @crlf
EXEC (@sql)

Script File

I have included here a .SQL text file containing the example script for this article.


For more information regarding how SQL Server deals with multiple files in a filegroup, please check some of the following references.

Comments (4)

  1. Tracey says:

    That’s only half of the work. Depending on the type of data, you’ve now got a completely fragmented table with extremely bad performance.

    1. I appreciate the feedback. You are absolutely correct: fragmentation may very well be an issue. This article was intended to demonstrate one technique, not recommend it for all cases. There are advantages to moving objects to new filegroups as a way to “rebalance”; however, that can run into supportability issues with vendors, and there could be other issues as well. One of the driving factors for wanting to balance a filegroup across many files could be simply that, with the current geometry, a limit may be reached: either because the volume will be full and cannot be expanded, or because you are reaching the maximum file size for the file system. My intent was to give someone a new tool for their tool belt. Your response definitely provides useful information on the use of that tool. Thank you.

  2. Ken says:

    Great for small databases, nearly impossible within large and highly active databases.

    1. Hi Ken, thank you for your comment. I agree that activity is a great concern. Size? My biggest concern with size would be stress on the disk subsystems. The slower the disks and the larger the database, the more negative an effect this approach could have on overall system performance, especially as it concerns maintenance (index maintenance and backups in particular). I’m not saying this technique will work for every scenario, but it will work for many, and as always: “test, test, test”. Hopefully, as I’m sure you’ll agree, if the database is sufficiently “large” (a purely subjective and moving target as time goes on), we shouldn’t be in the position of having to split a single-file filegroup to begin with. I appreciate and agree with your point that before implementing this technique, you should weigh the pros and cons of this method versus other ways of splitting the data (moving objects to other filegroups, table partitioning, filetables, etc.). Thank you.
