“There is insufficient system memory in resource pool ‘internal’ to run this query. Error: 701, Severity: 17, State: 123” on SSB Target Server – CACHESTORE_BROKERTO is consuming memory

I recently worked with a customer whose Service Broker target server was hitting the error “There is insufficient system memory in resource pool ‘internal’ to run this query. Error: 701, Severity: 17, State: 123”.

CACHESTORE_BROKERTO was consuming about 18 GB of RAM, and SQL Server was running out of memory.




SQL: Microsoft SQL Server 2012 – 11.0.5556.0 (X64)

OS: Windows NT 6.2 &lt;X64&gt; (Build 9200); SQL Server Standard Edition (64-bit)

Max server memory is capped at 26 GB of the 32 GB installed.

This SQL Server is the target server for a Service Broker application.

It is also the principal server for database mirroring configured on the same database.


  • From the DBCC MEMORYSTATUS output we see:



---------------------------------------- -----------

VM Reserved                              841608

VM Committed                             32768

Locked Pages Allocated                   172488

SM Reserved                              0

SM Committed                             0

Pages Allocated                          1722088



CACHESTORE_BROKERTO (node 0)             KB

---------------------------------------- -----------

VM Reserved                              0

VM Committed                             0

Locked Pages Allocated                   0

SM Reserved                              0

SM Committed                             0

Pages Allocated                          17685088  ===> around 17.6 GB


  • We checked the row count of sys.conversation_endpoints:


5,895,928 rows (almost 6 million)

We also checked state_desc and found that all of the conversations were in the CLOSED state.
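Both checks can be combined into a single aggregate query; this is a minimal sketch, run in the context of the target database:

```sql
-- Counts the conversation endpoints in each state; in our case
-- virtually all of the ~5.9 million rows were CLOSED.
SELECT state_desc, COUNT(*) AS endpoint_count
FROM sys.conversation_endpoints
GROUP BY state_desc
ORDER BY endpoint_count DESC;
```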


The blog post by Remus Rusanu shows how sys.conversation_endpoints grows out of control with CLOSED conversations that are never cleaned up, and how to fix it.



We observed the same pattern in our case:

select lifetime, state_desc, security_timestamp
from [target].sys.conversation_endpoints;


lifetime                state_desc                                                   security_timestamp

----------------------- ------------------------------------------------------------ -----------------------

2080-10-16 13:25:49.790 CLOSED                                                       1900-01-01 00:00:00.000

2081-06-29 07:58:46.440 CLOSED                                                       1900-01-01 00:00:00.000

2081-09-18 10:08:05.960 CLOSED                                                       1900-01-01 00:00:00.000

2081-08-26 07:26:49.037 CLOSED                                                       1900-01-01 00:00:00.000

2082-04-23 10:22:49.263 CLOSED                                                       1900-01-01 00:00:00.000

2081-11-20 09:13:52.453 CLOSED                                                       1900-01-01 00:00:00.000

2082-12-03 15:23:07.873 CLOSED                                                       1900-01-01 00:00:00.000

2081-11-06 08:26:34.610 CLOSED                                                       1900-01-01 00:00:00.000

2081-08-05 07:52:59.677 CLOSED                                                       1900-01-01 00:00:00.000

2082-05-23 08:25:33.107 CLOSED                                                       1900-01-01 00:00:00.000



The target conversation endpoints are in the CLOSED state, but the lifetime column indicates that they will not be reclaimed for decades; the last endpoint above, for example, will be reclaimed on May 23rd, 2082, because that is its conversation lifetime.

In case you wonder, that date comes from adding MAX_INT32 (i.e. 2,147,483,647) seconds to the current date.
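The arithmetic is easy to verify: 2,147,483,647 seconds is roughly 68 years, which is why conversations created around 2013–2014 show lifetimes in 2081–2082. A quick check, assuming the default (unspecified) dialog LIFETIME:

```sql
-- DATEADD(second, ...) takes an int, so MAX_INT32 is the largest
-- valid value; the result lands about 68 years in the future.
SELECT DATEADD(second, 2147483647, GETDATE()) AS endpoint_lifetime;
```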


From an operational point of view these target endpoints are ‘leaked’: they consume database space, and the system will refrain from deleting them for quite some time. Repeat this vicious exchange pattern several thousand times per hour, and in a few days the target database will simply run out of space.



In our case the target database is not running out of space; instead, we are seeing memory pressure.


The Service Broker transmission object cache is taking up to 18 GB of RAM even though the conversations are in the CLOSED state.



This could be because the broker uses transmission objects (TOs) to record the state of a conversation endpoint.


  • These are in-memory objects, about 1 KB in size, stored in the Database Engine address space.
  • Once a TO is assigned to a conversation, it remains assigned until the conversation ends.
  • As long as there is enough memory to hold the TOs for all existing concurrent conversations, they all stay in memory and nothing is cached to tempdb.
  • So if an application never ends its conversations, eventually there will be more TOs than can fit in memory, and the broker will start caching them to tempdb.
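The size of this cache can be tracked over time without a full DBCC MEMORYSTATUS by querying the memory-clerk DMV (column names shown are for SQL Server 2012 and later):

```sql
-- Reports the pages currently allocated to the Service Broker
-- transmission object cache, per memory node.
SELECT memory_node_id,
       pages_kb,
       pages_kb / 1024 AS pages_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'CACHESTORE_BROKERTO';
```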



The resolution is as per Rusanu’s blog. Even though the blog discusses the target endpoint ‘leak’ in terms of database space consumption, the same solution applies to our situation as well.

Note: The blog gives two detailed long-term solutions for controlling the issue of closed conversations not being cleaned up.
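The core of the application-side fix is that the target must call END CONVERSATION when it receives the EndDialog (or Error) system message, so the endpoint can actually be removed instead of lingering for its full lifetime. A minimal sketch of such a target activation procedure follows; the queue and procedure names here are hypothetical:

```sql
-- Hedged sketch, not the customer's actual code: end the
-- conversation from the target side when the dialog is over.
CREATE PROCEDURE dbo.TargetQueueActivation
AS
BEGIN
    DECLARE @handle uniqueidentifier,
            @message_type sysname;

    WAITFOR (
        RECEIVE TOP (1)
            @handle = conversation_handle,
            @message_type = message_type_name
        FROM dbo.TargetQueue   -- hypothetical queue name
    ), TIMEOUT 1000;

    IF @handle IS NOT NULL
       AND @message_type IN
           (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
            N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        END CONVERSATION @handle;
END
```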

If you are already facing a situation in which conversation endpoints have leaked, first fix the application using one of the proposed solutions (or both!). Then clean up the leaked conversations with the cleanup.sql script presented in Rusanu’s blog.
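As an illustration only (Rusanu’s cleanup.sql is the authoritative version), cleaning up the leaked endpoints amounts to ending each CLOSED conversation with the CLEANUP option, which removes the endpoint immediately instead of waiting for its remaining lifetime to expire:

```sql
-- Hedged sketch: run in the target database. A production
-- script should work in batches to avoid one huge transaction.
DECLARE @handle uniqueidentifier;
DECLARE endpoints CURSOR LOCAL FORWARD_ONLY READ_ONLY FOR
    SELECT conversation_handle
    FROM sys.conversation_endpoints
    WHERE state_desc = 'CLOSED';
OPEN endpoints;
FETCH NEXT FROM endpoints INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- WITH CLEANUP drops the endpoint without notifying the
    -- other side; safe here because the dialog is already CLOSED.
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM endpoints INTO @handle;
END
CLOSE endpoints;
DEALLOCATE endpoints;
```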



References / Credits:

Remus Rusanu’s blog post on how sys.conversation_endpoints grows out of control with CLOSED conversations and how to clean them up.