How can I find out how many threads are active in the CLR thread pool?


A customer was looking for a way to determine programmatically how many threads are active in the CLR thread pool.

There is no method that returns this information directly, but you can snap two blocks together:

// Each method reports two counts: the first out parameter is for worker
// threads, the second for I/O completion-port threads.
ThreadPool.GetMaxThreads(out int maxWorkers, out int maxCompletionPort);
ThreadPool.GetAvailableThreads(out int availableWorkers, out int availableCompletionPort);

// Active threads = maximum minus currently available.
int runningWorkers = maxWorkers - availableWorkers;
int runningCompletionPort = maxCompletionPort - availableCompletionPort;

But even though we answered the question, we don't know what the customer's problem is. The customer was kind enough to explain:

We have an issue where we exhaust the thread pool, causing our latency to skyrocket. We are investigating possible mitigations, and knowing when we are close to saturating the thread pool would tell us when we need to take more drastic measures. The thread pool threads are not CPU-bound; they are blocked on SQL queries. We have a long-term plan to use async/await, but that is a large change to our code base that will take time to implement, so we're looking for short-term mitigations to buy ourselves some time.
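
As a concrete illustration of the early-warning idea the customer describes, the two calls from the snippet above can be polled on a timer. This is only a sketch: the ThreadPoolWatchdog name, the 10% threshold, and the five-second interval are invented for the example.

using System;
using System.Threading;

static class ThreadPoolWatchdog
{
  // Hypothetical helper: polls the thread pool periodically and logs a
  // warning when fewer than 10% of worker threads remain available.
  public static Timer Start()
  {
    return new Timer(_ =>
    {
      ThreadPool.GetMaxThreads(out int maxWorkers, out _);
      ThreadPool.GetAvailableThreads(out int availableWorkers, out _);
      if (availableWorkers < maxWorkers / 10)
      {
        Console.WriteLine(
          $"Thread pool nearly saturated: {maxWorkers - availableWorkers} of {maxWorkers} workers in use");
      }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
  }
}

Note that the caller has to hold on to the returned Timer; if it gets garbage-collected, the polling stops.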

A colleague pointed out that if your thread pool threads are all blocked on SQL queries against the same server, then adding more threads won't help because the bottleneck is not the thread pool. The bottleneck is the SQL server. Any new thread pool threads you add will eventually block on SQL queries to the same unresponsive server.

Now, if your workload consists entirely of work items that access the database, then the database is your bottleneck, and there's not much you can do on the client to make it go faster. But if your workload is a mix of work items that access the database and work items that don't access the database, then you at least don't want the non-database work items to be blocked behind database work items.

If this were a Win32 application, you could create a second thread pool and queue database work items to that thread pool. Non-database work items go to the default thread pool. When the second thread pool runs out of threads, it stalls the processing of other database work items, but the non-database work items are not affected because they are running on a different thread pool.

But the CLR doesn't let you create a second thread pool, so your database work items and non-database work items have to learn to live in harmony.

Rewriting the code to be "async all the way down" may not be practical in the short term, but you could make it async at the top. Suppose your database work item looks like this:

ThreadPool.QueueUserWorkItem(_ => // the lambda takes the (unused) state parameter required by WaitCallback
{
  DoDatabaseStuff(x, y, z);
  MoreDatabaseStuff(1, 2, 3);
});

Add a single async at the top:

ThreadPool.QueueUserWorkItem(async _ =>
{
  using (await AccessToken.AcquireAsync()) {
    DoDatabaseStuff(x, y, z);
    MoreDatabaseStuff(1, 2, 3);
  }
});

The purpose of the AccessToken class is to control how many threads are doing database stuff. We put it in a using so that it will be disposed when control exits the block. This ensures that we don't leak tokens.

Since the AcquireAsync method is async, work items do not consume a thread while they are waiting for a token. By controlling the number of tokens, you can control how many thread pool threads are doing database work. In particular, you can make sure that database work items don't monopolize the thread pool threads, leaving enough of them for your non-database work items.
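
One possible implementation of AccessToken, sketched here purely for illustration (the SemaphoreSlim approach, the token count, and the Releaser type are assumptions rather than the customer's actual code), is a thin asynchronous wrapper around a SemaphoreSlim:

using System;
using System.Threading;
using System.Threading.Tasks;

static class AccessToken
{
  // Cap concurrent database work items at 8. The number is arbitrary;
  // tune it so that non-database work items still get thread pool threads.
  private static readonly SemaphoreSlim s_tokens = new SemaphoreSlim(8, 8);

  public static async Task<Releaser> AcquireAsync()
  {
    // WaitAsync waits for a token without blocking a thread pool thread.
    await s_tokens.WaitAsync().ConfigureAwait(false);
    return new Releaser();
  }

  // Disposing the releaser returns the token to the pool.
  // (A production version would guard against double-dispose.)
  public struct Releaser : IDisposable
  {
    public void Dispose() => s_tokens.Release();
  }
}

With something along those lines, the using block above acquires a token before doing database work and releases it on the way out, so the semaphore count is what actually caps how many thread pool threads can be blocked on SQL at any one time.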

¹ Maoni Stephens pointed out that there's also a managed debugging library called ClrMD, which gives you a lot of information about the thread pool. You may want to start with the ClrThread class.

Comments (18)
  1. David Haim says:

    I think the first thing people should learn when they are introduced to multithreaded programming is that asynchronous I/O != I/O on another thread.
    So many problems could have been avoided before they even came into being.

  2. Rising says:

    Raymond, will you be showing us the code for the AccessToken class?

    1. I leave that as a simple exercise.

    2. Voo says:

      The implementation is basically a wrapper around a SemaphoreSlim, with AcquireAsync preferably returning a struct that implements IDisposable.

      1. Marco says:

        Since casting a struct to an interface usually boxes it, it is worth noting that IDisposable.Dispose() in a using statement is a special case in Microsoft’s implementation.

        “A call to IDisposable.Dispose on a struct is generated as a constrained virtual call, which most of the time does NOT box the value. A constrained virtual call on a value type only boxes the value if the virtual method is NOT implemented by the type. The only circumstances under which a virtual method can be unimplemented by the value type is when the method is, say, ToString, and implemented by the base class, System.ValueType.”

        See: https://stackoverflow.com/questions/2412981/if-my-struct-implements-idisposable-will-it-be-boxed-when-used-in-a-using-statem
        See also: https://ericlippert.com/2011/03/14/to-box-or-not-to-box/

    3. Ray Koopa says:

      Isn’t it in referencesource anyway?

  3. Brian says:

    There are third-party managed thread-pool classes floating around on the internet. We used one when we had radically different response requirements for a particular set of activities. We left the out-of-box thread pool to do regular work, but we dispatched our “special” work to the other pool.

    1. Joshua Schaeffer says:

      I had to create a quick-and-dirty thread pool because the .NET thread pool was throttling new thread creation to one thread per half second while I was stuck calling blocking I/O that someone else wrote. The whole program startup would lock up and I’d look like a lazy idiot. That thing really is a stinking turd worthy of contempt for the embarrassment it caused me. .NET Core replaced the guts with WinRT, but you can’t create new thread pools with WinRT. You have to go custom or you have to P/Invoke the Vista thread pool. It does work now, really well.

      1. poizan42 says:

        Why are you abusing the thread pool in the first place? The whole point of it is to execute short tasks. If you need to keep spawning new threads then spawn a new thread instead of abusing the thread pool.

      2. Joshua says:

        Why are you P/Invoking to make a thread pool? Thread.Create() is yours to do with as you please.

  4. 640k says:

    I assume SQL Server isn’t single-threaded and can execute multiple long-running queries in parallel. A server (VM or physical) usually has a lot of cores; sometimes it might even be hard to saturate them all with meaningful workload. Multiple parallel calls from a client might actually increase performance, as long as the SQL server isn’t overloaded.

    1. AndyCadley says:

      Raymond’s suggestion of “async at the top” can probably help prevent saturation from the top level, but I’d suspect query tuning could help fix it from the bottom too.

    2. Damien says:

      If all of the queries are similar/identical, you might just be adding more lock contention by parallelizing the activity, rather than achieving performance gains.

  5. Fred says:

    Man, the link at the top is textbook thermonuclear social skills. More posts should have that tag.

  6. I wonder what Stephen Toub (https://social.msdn.microsoft.com/profile/Stephen+Toub+-+MSFT) and Stephen Cleary (http://blog.stephencleary.com/) would say about that suggestion. What’s the benefit of the state machine here?

    1. Mark S says:

      About the suggestion to use ‘async’? The key thing to understand is that async I/O is a different animal that doesn’t consume threads in the usual way. See this post, for example: https://blog.stephencleary.com/2013/11/there-is-no-thread.html

  7. M says:

    I understand that this article is probably more about async, but why not use normal threads for the database requests? (Dedicated database threads seem to be one of the MSDN textbook examples of when NOT to use the .NET thread pool.)

  8. Alois Kraus says:

    An alternative is to start your SQL queries as long-running tasks, which will never exhaust the thread pool because the TPL simply creates a new thread for each long-running task. I guess the customer already has some way to control the number of concurrent SQL queries so as not to overwhelm the database.

    Task.Factory.StartNew(() => { /* Sql … */ }, TaskCreationOptions.LongRunning);

    This will “waste” a new thread, but since the latency of this construct is in the region of 0.1 ms, it is a non-issue for slow SQL queries. The superior task-enqueue performance of the ThreadPool plays no role in this case, so you can do old-fashioned manual thread creation.

Comments are closed.
