In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1

We just announced the release of Service Pack 1 for SQL Server 2016. With SP1 we made a push to bring a consistent programming surface area across all editions of SQL Server. One of the outcomes is that In-Memory OLTP (aka Hekaton), the premier performance technology for transaction processing, data ingestion, data load, and transient data scenarios, is now available in SQL Server Standard Edition and Express Edition, as long as you have SQL Server 2016 SP1.

In this blog post we recap what the technology is. We then describe the resource/memory limitations in Express and Standard Edition. We go on to describe the scenarios for which you’d want to consider In-Memory OLTP. We conclude with a sample script illustrating the In-Memory OLTP objects, and some pointers to get started.

How does In-Memory OLTP work?

In-Memory OLTP can provide great performance gains, for the right workloads. One of our customers managed to achieve 1.2 Million requests per second with a single machine running SQL Server 2016, leveraging In-Memory OLTP.

Now, where does this performance gain come from? In essence, In-Memory OLTP improves performance of transaction processing by making data access and transaction execution more efficient, and by removing lock and latch contention between concurrently executing transactions: it is not fast because it is in-memory; it is fast because it is optimized around the data being in-memory. Data storage, access, and processing algorithms were redesigned from the ground up to take advantage of the latest enhancements in in-memory and high concurrency computing.

Now, just because data lives in-memory does not mean you lose it when there is a failure. By default, all transactions are fully durable, meaning that you have the same durability guarantees you get for any other table in SQL Server: as part of transaction commit, all changes are written to the transaction log on disk. If there is a failure at any time after the transaction commits, your data is there when the database comes back online. In addition, In-Memory OLTP works with all high availability and disaster recovery capabilities of SQL Server, like AlwaysOn, backup/restore, etc.

To leverage In-Memory OLTP in your database, you use one or more of the following types of objects:

  • Memory-optimized tables are used for storing user data. You declare a table to be memory-optimized at create time.
  • Non-durable tables are used for transient data, either for caching or for intermediate result sets (replacing traditional temp tables). A non-durable table is a memory-optimized table that is declared with DURABILITY=SCHEMA_ONLY, meaning that changes to these tables do not incur any IO. This avoids consuming log IO resources for cases where durability is not a concern.
  • Memory-optimized table types are used for table-valued parameters (TVPs), as well as intermediate result sets in stored procedures. These can be used instead of traditional table types. Table variables and TVPs that are declared using a memory-optimized table type inherit the benefits of non-durable memory-optimized tables: efficient data access, and no IO.
  • Natively compiled T-SQL modules are used to further reduce the time taken for an individual transaction by reducing CPU cycles required to process the operations. You declare a Transact-SQL module to be natively compiled at create time. At this time, the following T-SQL modules can be natively compiled: stored procedures, triggers and scalar user-defined functions.

In-Memory OLTP is built into SQL Server, and starting with SP1, you can use all these objects in any edition of SQL Server. And because these objects behave very similarly to their traditional counterparts, you can often gain performance benefits while making only minimal changes to the database and the application. Plus, you can have both memory-optimized and traditional disk-based tables in the same database, and run queries across the two. You will find a Transact-SQL script showing an example for each of these types of objects towards the end of this post.
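
The script at the end of this post covers memory-optimized tables, a table type, and a natively compiled stored procedure; it does not include a natively compiled scalar user-defined function, so here is a minimal sketch of one (the function name and logic are illustrative):

 -- natively compiled scalar UDF (illustrative)
 CREATE FUNCTION dbo.ufn_add (@a INT, @b INT)
 RETURNS INT
 WITH NATIVE_COMPILATION, SCHEMABINDING
 AS
 BEGIN ATOMIC
     WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT,
           LANGUAGE=N'us_english')
   -- the body is compiled to machine code at create time
   RETURN @a + @b
 END
 GO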

Memory quota in Express and Standard Editions

In-Memory OLTP includes memory-optimized tables, which are used for storing user data. These tables are required to fit in memory. Therefore, you need to ensure you have enough memory for the data stored in memory-optimized tables. In addition, both Standard Edition and Express Edition impose a per-database quota on data stored in memory-optimized tables.

To estimate memory size required for your data, consult the topic Estimate Memory Requirements for Memory-Optimized Tables.
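
As a rough illustration of the arithmetic that topic walks through (the row count and column sizes below are made up for the example):

 -- Hypothetical table: 5 million rows, two 8-byte columns, one hash index
 -- Row body:   2 * 8 bytes                   = 16 bytes
 -- Row header: 24 bytes + 8 bytes per index  = 32 bytes
 -- Rows:       5,000,000 * (16 + 32) bytes   = ~240 MB
 -- Hash index: 8,388,608 buckets * 8 bytes   = 64 MB (bucket count rounds up to a power of two)
 -- Total:      ~304 MB, which fits the 352 MB Express quota, but with little headroom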

These are the per-database quotas for In-Memory OLTP for all SQL Server editions, with SQL Server 2016 SP1:

SQL Server 2016 SP1 Edition   In-Memory OLTP quota (per DB)
Express                       352 MB
Web                           16 GB
Standard                      32 GB
Developer                     Unlimited
Enterprise                    Unlimited

The following items count towards the database quota:

  • Active user data rows in memory-optimized tables and table variables. Note that old row versions do not count toward the cap.
  • Indexes on memory-optimized tables.
  • Operational overhead of ALTER TABLE operations, which can be up to the full table size.

If an operation causes the database to hit the cap, the operation will fail with an out-of-quota error:

 Msg 41823, Level 16, State 171, Line 6
 Could not perform the operation because the database has reached its quota for in-memory tables. See 'https://go.microsoft.com/fwlink/?LinkID=623028' for more information.

* Note: at the time of writing, this link points to an article about In-Memory OLTP in Azure SQL Database, which shares the same quota mechanism as SQL Server Express and Standard edition. We’ll update that article to discuss quotas in SQL Server as well.

If this happens, you will no longer be able to insert or update data, but you can still query the data. The mitigation is to delete data or upgrade to a higher edition. In the end, how much memory you need depends to a large extent on how you use In-Memory OLTP. The next section has details about usage patterns, as well as some pointers to ways you can manage the in-memory footprint of your data.
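
If you want the application to degrade gracefully instead, you can catch the error, free up memory, and retry. A minimal sketch, assuming a hypothetical memory-optimized table dbo.events with payload and created_at columns:

 BEGIN TRY
   INSERT dbo.events (payload) VALUES (N'new event')
 END TRY
 BEGIN CATCH
   IF ERROR_NUMBER() <> 41823 -- only handle the out-of-quota error
   BEGIN
     THROW;
   END
   -- out of quota: free memory by deleting (or archiving) old rows, then retry
   DELETE FROM dbo.events WHERE created_at < DATEADD(DAY, -7, SYSUTCDATETIME());
   INSERT dbo.events (payload) VALUES (N'new event');
 END CATCH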

You can monitor memory utilization through DMVs as well as Management Studio. Details are in the topic Monitor and Troubleshoot Memory Usage. Note that memory reported in these DMVs and reports can become slightly higher than the quota, since they include memory required for old row versions. Old row versions do count toward the overall memory utilization and you need to provision enough memory to handle those, but they do not count toward the quota in Express and Standard editions.
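
For example, the following query shows per-table memory consumption in the current database:

 SELECT OBJECT_NAME(object_id) AS table_name,
        memory_allocated_for_table_kb,
        memory_used_by_table_kb,
        memory_allocated_for_indexes_kb
 FROM sys.dm_db_xtp_table_memory_stats
 ORDER BY memory_allocated_for_table_kb DESC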

Usage scenarios for In-Memory OLTP

In-Memory OLTP is not a magic go-fast button, and is not suitable for all workloads. For example, memory-optimized tables will not really bring down your CPU utilization if most of the queries are performing aggregation over large ranges of data – Columnstore helps for that scenario.

Here is a list of scenarios and application patterns where we have seen customers be successful with In-Memory OLTP.

High-throughput and low-latency transaction processing

This is really the core scenario for which we built In-Memory OLTP: support large volumes of transactions, with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.
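
A minimal sketch of such a catalog table (the schema is illustrative, not taken from the customer scenario):

 CREATE TABLE dbo.shard_catalog
 ( file_id INT NOT NULL,
   shard_id INT NOT NULL,
   node_name NVARCHAR(256) NOT NULL,
   -- hash index gives constant-time lookup of a shard's location
   CONSTRAINT pk_shard_catalog PRIMARY KEY NONCLUSTERED HASH (file_id, shard_id)
     WITH (BUCKET_COUNT=1000000))
 WITH (MEMORY_OPTIMIZED=ON)
 GO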

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.

To get started in an existing application, use the transaction performance analysis report to identify the objects you want to migrate, and use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time. And it is often beneficial to ingest data into a SQL Server database rather than some other destination, because SQL Server makes running queries against the data really fast, and allows you to get real-time insights.

Common application patterns are: Ingesting sensor readings and events, to allow notification, as well as historical analysis. Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, either

  • periodically offload data from the memory-optimized table to a disk-based table with a clustered columnstore index (a sketch follows below), or
  • use a temporal memory-optimized table, which keeps recent data in memory and moves historical data to a disk-based history table.
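
A minimal sketch of the offload approach, assuming hypothetical tables dbo.sensor_hot (memory-optimized) and dbo.sensor_history (disk-based, with a clustered columnstore index):

 -- run periodically, e.g., from a SQL Agent job
 DECLARE @cutoff DATETIME2 = DATEADD(HOUR, -1, SYSUTCDATETIME())
 BEGIN TRANSACTION
   -- copy cold rows to the disk-based columnstore table
   INSERT INTO dbo.sensor_history (sensor_id, reading, ts)
   SELECT sensor_id, reading, ts
   FROM dbo.sensor_hot WITH (SNAPSHOT)
   WHERE ts < @cutoff
   -- remove them from the memory-optimized table to reclaim memory
   DELETE FROM dbo.sensor_hot WITH (SNAPSHOT)
   WHERE ts < @cutoff
 COMMIT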

The following sample is a smart grid application that uses a temporal memory-optimized table, a memory-optimized table type, and a natively compiled stored procedure, to speed up data ingestion, while managing the In-Memory OLTP storage footprint of the sensor data: release and source code.

Caching and session state

The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

ASP.NET session state is a very successful use case for In-Memory OLTP. With SQL Server, one customer was able to achieve 1.2 Million requests per second. Since then, they have started using In-Memory OLTP for the caching needs of all mid-tier applications in the enterprise. Details: https://blogs.msdn.microsoft.com/sqlcat/2016/10/26/how-bwin-is-using-sql-server-2016-in-memory-oltp-to-achieve-unprecedented-performance-and-scale/

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in SQL Server. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.
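
A minimal sketch of the key-value variant (table and column names are illustrative):

 CREATE TABLE dbo.session_cache
 ( session_id NVARCHAR(128) NOT NULL
     PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
   payload VARBINARY(MAX),
   expires_at DATETIME2 NOT NULL)
 WITH (MEMORY_OPTIMIZED=ON,
       DURABILITY=SCHEMA_ONLY) -- contents are never written to disk
 GO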

Get started with memory-optimizing ASP.NET session state by leveraging the scripts published on GitHub to replace the objects created by the built-in session state provider.

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp tables, table variables, and table-valued parameters.

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and #temp tables.
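
For instance, a traditional table variable can be swapped for one based on a memory-optimized table type (a sketch with illustrative names):

 CREATE TYPE dbo.tt_order_lines AS TABLE
 ( order_id INT NOT NULL,
   qty INT NOT NULL,
   INDEX ix_order_id HASH (order_id) WITH (BUCKET_COUNT=100000))
 WITH (MEMORY_OPTIMIZED=ON)
 GO
 -- DECLARE @lines TABLE (order_id INT, qty INT)  -- traditional, backed by tempdb
 DECLARE @lines dbo.tt_order_lines                -- memory-optimized, no tempdb or log IO
 INSERT @lines (order_id, qty) VALUES (1, 5)
 SELECT order_id, qty FROM @lines
 GO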

Case study illustrating benefits of memory-optimized table-valued parameters: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/04/07/a-technical-case-study-high-speed-iot-data-ingestion-using-in-memory-oltp-in-azure/

Implementation considerations

To get started see: Improving temp table and table variable performance using memory optimization.

ETL (Extract, Transform, Load)

ETL workflows often include load of data into a staging table, transformations of the data, and load into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel you get additional scaling benefits from the memory-optimization.
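
A minimal sketch of a non-durable staging table (the schema is illustrative); a transformation procedure over it would look much like dbo.usp_ingest_table1 in the script below:

 CREATE TABLE dbo.staging_sales
 ( row_id INT IDENTITY PRIMARY KEY NONCLUSTERED,
   raw_amount NVARCHAR(50) NOT NULL,
   loaded_at DATETIME2 NOT NULL DEFAULT (SYSUTCDATETIME()))
 WITH (MEMORY_OPTIMIZED=ON,
       DURABILITY=SCHEMA_ONLY) -- staged rows incur no IO and vanish on restart
 GO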

Getting started

Before you can start using In-Memory OLTP, you need to create a MEMORY_OPTIMIZED_DATA filegroup. In addition, we recommend using database compatibility level 130, and setting the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.

You can use the script at the following location to create the filegroup in the default data folder, and set the recommended settings:
https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/in-memory/t-sql-scripts/enable-in-memory-oltp.sql
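
If you prefer to run the statements yourself, here is a minimal sketch (with a placeholder database name and file path):

 ALTER DATABASE imoltp_demo ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
 GO
 -- for a memory-optimized filegroup, FILENAME points to a folder, not a file
 ALTER DATABASE imoltp_demo ADD FILE (NAME='imoltp_data', FILENAME='C:\Data\imoltp_data')
   TO FILEGROUP imoltp_fg
 GO
 ALTER DATABASE imoltp_demo SET COMPATIBILITY_LEVEL = 130
 GO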

The following script illustrates In-Memory OLTP objects you can create in your database:

 -- configure recommended DB option
 ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
 GO
 -- memory-optimized table
 CREATE TABLE dbo.table1
 ( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
   c2 NVARCHAR(MAX))
 WITH (MEMORY_OPTIMIZED=ON)
 GO
 -- non-durable table
 CREATE TABLE dbo.temp_table1
 ( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
   c2 NVARCHAR(MAX))
 WITH (MEMORY_OPTIMIZED=ON,
       DURABILITY=SCHEMA_ONLY)
 GO
 -- memory-optimized table type
 CREATE TYPE dbo.tt_table1 AS TABLE
 ( c1 INT IDENTITY,
   c2 NVARCHAR(MAX),
   is_transient BIT NOT NULL DEFAULT (0),
   INDEX ix_c1 HASH (c1) WITH (BUCKET_COUNT=1024))
 WITH (MEMORY_OPTIMIZED=ON)
 GO
 -- natively compiled stored procedure
 CREATE PROCEDURE dbo.usp_ingest_table1
   @table1 dbo.tt_table1 READONLY
 WITH NATIVE_COMPILATION, SCHEMABINDING
 AS
 BEGIN ATOMIC
     WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT,
           LANGUAGE=N'us_english')
 
   DECLARE @i INT = 1
 
   WHILE @i > 0
   BEGIN
     INSERT dbo.table1
     SELECT c2
     FROM @table1
     WHERE c1 = @i AND is_transient=0
 
     IF @@ROWCOUNT > 0
       SET @i += 1
     ELSE
     BEGIN
       INSERT dbo.temp_table1
       SELECT c2
       FROM @table1
       WHERE c1 = @i AND is_transient=1
 
       IF @@ROWCOUNT > 0
         SET @i += 1
       ELSE
         SET @i = 0
     END
   END
 
 END
 GO
 -- sample execution of the proc
 DECLARE @table1 dbo.tt_table1
 INSERT @table1 (c2, is_transient) VALUES (N'sample durable', 0)
 INSERT @table1 (c2, is_transient) VALUES (N'sample non-durable', 1)
 EXECUTE dbo.usp_ingest_table1 @table1=@table1
 SELECT c1, c2 from dbo.table1
 SELECT c1, c2 from dbo.temp_table1
 GO

A perf demo using In-Memory OLTP can be found at: in-memory-oltp-perf-demo-v1.0.

Try In-Memory OLTP in SQL Server today!

Resources to get started: