How bad is it to delay closing a thread handle for a long time after the thread has exited?


A customer has a component that creates a thread with Create­Thread in order to do something, and eventually that thread exits normally. The code hangs onto the thread handle for the lifetime of the component, because the component wants to wait until the worker thread has fully exited before it will shut down. The component eventually closes the thread handle, but it may take a very long time before the handle gets closed.
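The described pattern can be sketched roughly like this (a minimal sketch with error handling omitted; the component structure and worker function are hypothetical, not from the customer's actual code):

```c
#include <windows.h>

// Hypothetical component that keeps its worker's handle for its whole lifetime.
struct Component
{
    HANDLE workerThread;
};

static DWORD WINAPI DoSomething(LPVOID parameter)
{
    // ... do the work, then exit normally ...
    return 0;
}

void StartComponent(struct Component* component)
{
    component->workerThread =
        CreateThread(NULL, 0, DoSomething, NULL, 0, NULL);
}

void ShutDownComponent(struct Component* component)
{
    // Wait until the worker thread has fully exited before shutting down.
    // By this point the thread may have exited long ago; the handle has
    // simply been held open the whole time.
    WaitForSingleObject(component->workerThread, INFINITE);
    CloseHandle(component->workerThread);
}
```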

The customer's question was basically, "How bad is it to delay closing a thread handle for a long time after the thread has exited?" They were concerned that failing to close the handle would have a noticeable impact upon the host process, like leaving a megabyte of memory reserved for the thread's stack.

On the other hand, if the impact is de minimis, then the customer would rather not add complexity and tinker with code that has been working just fine so far.

Fortunately, the answer is, "It's not that bad." When the thread exits, nearly all of its resources are released. There may be some straggling resources like a (now-empty) data structure to keep track of the outstanding I/O for the thread, and data members to record the thread's exit code, thread times, security descriptor, processor affinity, and other miscellaneous information.
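Those straggling data members are also why the handle remains useful after the thread exits: the recorded exit code and thread times can still be queried through it. A sketch (error handling omitted):

```c
#include <windows.h>
#include <stdio.h>

void ReportExitedThread(HANDLE thread)
{
    // These calls work even long after the thread has exited, because the
    // kernel keeps the exit code and times as long as a handle is open.
    DWORD exitCode;
    GetExitCodeThread(thread, &exitCode);

    FILETIME creationTime, exitTime, kernelTime, userTime;
    GetThreadTimes(thread, &creationTime, &exitTime, &kernelTime, &userTime);

    printf("thread exited with code %lu\n", exitCode);
}
```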

But it's not a significant amount of extra data in the grand scheme of things if you're going to have only one of these "long-lived thread handles" per process. Just don't make it a habit. (Now, if you're going to have thousands of them, we may need to talk.)

Comments (12)
  1. henke37 says:

    Also note that you might want to check whether the thread is in an otherwise dead process. The thread might still refer to a few things in the process, preventing them from being cleaned up.

    1. Harry Johnston says:

      I don't think so; it certainly won't stop the process from exiting. Having a handle to a thread probably results in the kernel keeping the process object as well as the thread object, but that's still not all that much overhead. Besides, in this context the thread and the handle both belong to the same process, so when the process exits the handle will be closed anyway.

  2. Piotr says:

    What's the overhead of a thread? I once had an application that created a thread whenever an event happened, but never cleaned those threads up. The threads did essentially nothing: each opened a socket and kept listening for a message that never arrived, because the communication had ended long ago. Those "idle" threads pegged the CPU at 100% just by being there. Could it have been context switching?

    1. If the threads are still active, then there is still the potential for them to leave idle status (if WaitForMultipleObjects has a timeout, for example). Have a large enough number of threads sitting around each doing a tiny amount of work, and eventually they'll use up the majority of your CPU.

      1. Piotr says:

        They fired up every minute to check the socket and go back to sleep. And there were over 10 000 of them after half a day (at this point I had to restart the service).

        1. Ken Hagan says:

          There are only 60,000 milliseconds in a minute. If you have 10,000 threads doing something every minute, the something doesn't need to be very large. A context switch, a kernel transition or two (to check for IO), and swapping even the minimum portion of 10,000 different stacks might do it.
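          The arithmetic behind that estimate, spelled out:

          ```c
          #include <assert.h>
          #include <stdio.h>

          int main(void)
          {
              int ms_per_minute = 60 * 1000; // 60,000 ms in a minute
              int threads = 10000;           // each wakes once per minute
              int budget_ms = ms_per_minute / threads;

              // Each wake needs only ~6 ms of CPU to keep one core at 100%.
              printf("%d ms per wake saturates a core\n", budget_ms);
              assert(budget_ms == 6);
              return 0;
          }
          ```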

        2. DWalker07 says:

          Well, there are only 1440 minutes in a day........

          1. doesntsoundgood says:

            Just speculating, but are you just counting the threads created directly by the application in response to the event, or are you also including any additional threads that might have been spun up by the system or related components for the work done by those worker threads (eg. the socket left opened by each thread)?

            Anyway, having an increasing number of active (ie. non-exited, idling or otherwise) threads lying around opening and holding onto resources is usually not a good thing no matter how you look at it. It is quite different from holding onto handles of already-exited threads for which nearly all associated resources have been cleaned up.

          2. ismoderationbroken1 says:

            Just speculating, but are you just counting the threads created directly by the application in response to the event, or are you also including any additional threads that might have been spun up by the system or related components for the work done by those worker threads (eg. the socket left opened by each thread)?

            Anyway, having an increasing number of active (ie. non-exited, idling or otherwise) threads lying around opening and holding onto resources is usually not a good thing no matter how you look at it. It is quite different from holding onto handles of already-exited threads for which nearly all associated resources have been cleaned up.

            [Moderation is not broken. It's slow. Because I'm on vacation. -Raymond]

        3. voo says:

          I think it's fair to say that if you have a situation where you end up with 10k threads something went wrong in your application design. Every thread requires not insignificant resources.

          1. Piotr says:

            It was a bug: the dev team created a "manager" object on each request, even though it should have been a singleton, and never called Dispose(). I was asking about the overhead because, when this happened on the server, I tried to slice and dice it with the Windows Performance Toolkit and could not find any indication of why my CPU was so hot. Only by chance did I notice in Resource Monitor that the Threads column had a number that was far too big. By inspecting the threads' stacks in WinDbg I could conclude that most of the threads were sitting in the same ThisThirdPartyLibraryThingieDoer() method, and we found the problem. But WPT could not show me anything like "hey, the CPU usage is caused by context switching," so I thought that maybe there was some other way in which lots of threads that do virtually nothing could swamp the CPU.

  3. santoshsa says:

    Thousands of them is possible if it is an RDS server with 150-200 concurrent sessions.

Comments are closed.
