TempDB:: Table variable vs local temporary table

As you know, tempdb is used by user applications and by SQL Server itself to store transient results needed to process a workload. Objects created by users and user applications are called ‘user objects’, while objects created by the SQL Server engine as part of executing the workload are called ‘internal objects’. In this blog, I will focus on the user objects, commonly referred to as temporary tables (#, ##) and table variables. While the differences between a ##table (global temporary table) and a #table (local temporary table) are well understood, there is a fair amount of confusion between a #table and a table variable. Let me walk through the main differences between these.

A table variable, like any other variable, is a very useful programming construct. The scoping rules of a table variable are similar to those of any other programming variable. For example, if you declare a variable inside a stored procedure, it can’t be accessed outside that stored procedure. Incidentally, a #table behaves very similarly. So why did we create table variables? Well, a table variable can be very powerful when used with stored procedures, to pass table-valued data as a (read-only) input parameter (new functionality available starting with SQL Server 2008), or to store the result of a table-valued function. Here are some similarities and differences between the two:
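To make the parameter-passing point concrete, here is a minimal sketch of a table-valued parameter (the type, procedure, and column names are hypothetical). Note that table-valued parameters must be declared READONLY:

```sql
-- Define a table type (hypothetical name) to use as the parameter type.
create type OrderList as table (order_id int, qty int)
go

-- Table-valued parameters must be marked READONLY (input only).
create procedure ProcessOrders @orders OrderList readonly
as
begin
    select order_id, qty from @orders
end
go

-- Fill a table variable of that type and pass it to the procedure.
declare @o OrderList
insert into @o values (1, 10), (2, 20)
exec ProcessOrders @orders = @o
```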

·         First, a table variable is NOT necessarily memory resident. Under memory pressure, the pages belonging to a table variable can be pushed out to tempdb. Here is an example that shows the space taken by a table variable in tempdb:

use tempdb
go

if object_id('tempdb..#tv_source') is not null
    drop table #tv_source
go

create table #tv_source(c1 int, c2 char(8000))
go

declare @i int
select @i = 0
while (@i < 1000)
begin
       insert into #tv_source values (@i, replicate ('a', 100))
       select @i = @i + 1
end

DECLARE @tv_target TABLE (c11 int, c22 char(8000))

INSERT INTO @tv_target (c11, c22)
    SELECT c1, c2
    FROM #tv_source


-- checking the size through the DMV.
-- The sizes here are in 8 KB pages. This shows the space allocated
-- to user objects to be 2000 pages (1000 pages for #tv_source and
-- 1000 pages for @tv_target).


select total_size = SUM (unallocated_extent_page_count) +
                    SUM (user_object_reserved_page_count) +
                    SUM (internal_object_reserved_page_count) +
                    SUM (version_store_reserved_page_count) +
                    SUM (mixed_extent_page_count),
       SUM (unallocated_extent_page_count) as freespace_pgs,
       SUM (user_object_reserved_page_count) as user_obj_pgs,
       SUM (internal_object_reserved_page_count) as internal_obj_pgs,
       SUM (version_store_reserved_page_count) as version_store_pgs,
       SUM (mixed_extent_page_count) as mixed_extent_pgs
from sys.dm_db_file_space_usage
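The DMV above aggregates across all of tempdb. To attribute the allocation to your own session instead, a sketch using sys.dm_db_session_space_usage (counts are again in 8 KB pages; user_objects_alloc_page_count covers both #tables and table variables allocated by the session):

```sql
-- Per-session view of tempdb allocations for the current session.
select session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
from sys.dm_db_session_space_usage
where session_id = @@spid
```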

·         Second, when you create a table variable, it is like a regular DDL operation, and its metadata is stored in the system catalog. Here is an example to check this:

declare @ttt TABLE(c111 int, c222 int)

select name from sys.columns where object_id > 100 and name like 'c%'

This will return two rows containing the columns c111 and c222. This means that if you are encountering DDL contention, you cannot address it by changing a #table to a table variable.

·         Third, transactional and locking semantics. Table variables don’t participate in user transactions or locking. Here is an example:

-- create a source table
create table tv_source(c1 int, c2 char(100))
go

declare @i int
select @i = 0
while (@i < 100)
begin
       insert into tv_source values (@i, replicate ('a', 100))
       select @i = @i + 1
end

-- using #table
create table #tv_target (c11 int, c22 char(100))

INSERT INTO #tv_target (c11, c22)
        SELECT c1, c2
        FROM tv_source



-- using table variable


DECLARE @tv_target TABLE (c11 int, c22 char(100))
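Continuing in the same batch as the DECLARE above, a minimal sketch of the transactional difference: populate the table variable inside a transaction and roll it back. Unlike a #table, the table variable keeps its rows, because table variables are not affected by a transaction rollback (the count of 100 assumes the 100-row tv_source built above):

```sql
-- Populate the table variable inside a transaction, then roll back.
begin tran
    INSERT INTO @tv_target (c11, c22)
        SELECT c1, c2
        FROM tv_source
rollback tran

-- The table variable still holds all 100 rows; the same rollback
-- would have emptied #tv_target.
select count(*) as rows_in_table_variable from @tv_target
```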