When DLL_PROCESS_DETACH tells you that the process is exiting, your best bet is just to return without doing anything

When the DllMain function receives a reason code of DLL_PROCESS_DETACH, the increasingly-inaccurately-named lpReserved parameter is used to indicate whether the process is exiting.

And if the process is exiting, then you should just return without doing anything.
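The check itself is tiny. Here is a minimal sketch of the decision (the ShouldSkipCleanup helper is made up for illustration, and the DLL_PROCESS_DETACH value is reproduced locally from winnt.h so the sketch is self-contained):

```cpp
#include <cassert>

// Reason-code value as defined in <winnt.h>, reproduced here so this
// sketch compiles without <windows.h>.
constexpr unsigned long kDllProcessDetach = 0;  // DLL_PROCESS_DETACH

// Hypothetical helper: true exactly when DllMain should do nothing.
// On DLL_PROCESS_DETACH, lpReserved is non-null when the process is
// exiting (so all cleanup is pointless) and null on a FreeLibrary
// unload (where cleanup is still required).
bool ShouldSkipCleanup(unsigned long reason, void* lpReserved) {
    return reason == kDllProcessDetach && lpReserved != nullptr;
}
```

Inside your real DllMain you would simply return TRUE immediately whenever this predicate holds.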

No, really.

Don't worry about freeing memory; it will all go away when the process address space is destroyed. Don't worry about closing handles; handles are closed automatically when the process handle table is destroyed. Don't try to call into other DLLs, because those other DLLs may already have received their DLL_PROCESS_DETACH notifications, in which case they may behave erratically in the same way that a Delphi object behaves erratically if you try to use it after its destructor has run.

The building is being demolished. Don't bother sweeping the floor and emptying the trash cans and erasing the whiteboards. And don't line up at the exit to the building so everybody can move their in/out magnet to out. All you're doing is making the demolition team wait for you to finish these pointless housecleaning tasks.

Okay, if you have internal file buffers, you can write them out to the file handle. That's like remembering to take the last pieces of mail from the mailroom out to the mailbox. But don't bother closing the handle or freeing the buffer, in the same way you shouldn't bother updating the "mail last picked up on" sign or resetting the flags on all the mailboxes. And ideally, you would have flushed those buffers as part of your normal wind-down before calling ExitProcess, in the same way that mailing those last few letters should have been taken care of before you called in the demolition team.
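In portable stdio terms, the distinction looks like this: the flush is the part that matters; the close and the free are the parts to skip. A minimal sketch (FlushPendingOutput is a made-up name for illustration):

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Hypothetical last mail run: push any buffered output to the file,
// but deliberately skip fclose() and skip freeing any buffers --
// the OS reclaims handles and memory when the process exits.
void FlushPendingOutput(std::FILE* f) {
    std::fflush(f);  // mandatory: otherwise buffered data is lost
    // intentionally no fclose(f), no free() of associated buffers
}
```

The flushed data is durable even though the handle is never explicitly closed; the process teardown closes it for you.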

I regularly use a program that doesn't follow this rule. The program allocates a lot of memory during the course of its life, and when I exit the program, it just sits there for several minutes, sometimes spinning at 100% CPU, sometimes churning the hard drive (sometimes both). When I break in with the debugger to see what's going on, I discover that the program isn't doing anything productive. It's just methodically freeing every last byte of memory it had allocated during its lifetime.

If my computer isn't under a lot of memory pressure, then most of the memory the program allocated during its lifetime hasn't yet been paged out, so freeing every last drop of memory is a CPU-bound operation. On the other hand, if I had kicked off a build or done something else memory-intensive, then most of that memory has been paged out, which means that the program pages it all back in from the hard drive, just so it can call free on it. Sounds kind of spiteful, actually. "Come here so I can tell you to go away."

All this anal-retentive memory management is pointless. The process is exiting. All that memory will be freed when the address space is destroyed. Stop wasting time and just exit already.

Comments (52)
  1. Adam Rosenfield says:

    And doing all that uses up extra battery power.  Apple has an iOS programming guideline somewhere saying that when your app is exiting, don't bother trying to free all of your memory to avoid wasting battery power, since the OS takes care of all that when you exit.

    The one case where you do want to free every byte on exit is if you're checking for memory leaks, but you should not be doing that in shipping software, only in debug builds.

  2. JS Bangs: If your program regularly allocates anywhere from 200MB to 2GB+ of RAM each time it's run and then tries to deallocate everything when it receives a shutdown notification from the OS, then it's doing a disservice to both the user and the computer by taking five to ten minutes to do all that unnecessary cleanup. Program XXX does this regularly for me, and while I don't run into the issue often (as I rarely close the program), I do run into it whenever I need to install an addon or update to the latest version. It can be very annoying sometimes.

  3. Mordachai says:

    This is a deep problem of current language design, IMO.  If this guarantee had been available many years ago, modern languages could well be designed with support for "system exiting, don't bother".  It could well be rolled into the standard runtime libraries that Microsoft supplies for Visual C++ and into .NET directly.

    Trivial uses of this concept can help modern software: for obvious large-buffer allocation one could easily write the additional code necessary to know whether to discard the buffers or actually deallocate them.

    But as a system issue, new language support (or at least library support) will be necessary IMO.

  4. avakar says:

    A simple solution would be to turn free into a no-op when the application begins termination.

  5. Peter says:

    I agree with JS.  I doubt the issue is that the app is "bothering" to do anything.  The memory clean-up probably happens as a natural consequence of the way the program is structured.

    That being said, I can think of a few ways you could deal with this situation, even using the RAII pattern.  For example, you could just call ExitProcess() to short-circuit the destructors.  You could also override operator delete so that it doesn't do anything on shutdown.

    Anyway, in general I don't think this is something you should worry about.  Take such measures only if you see a performance problem.

  6. Mijzelf says:

    My programs also always clean up everything. This way you can check that nothing has been leaking. Of course this is only necessary in debug builds, but switching it off in release builds might introduce other bugs.

  7. Patrick Simpson says:

    I tend to set up my programs so all state is reached through one singleton object. In the debug build I destroy this when exiting, in the release build I don't bother. This gives me all the benefits of finding leaks in debug builds without taking ages to shut down in the release version.

  8. Anonymous Coward says:

    Leo Davidson: it should be safe to skip the destructors of global objects, regardless of what they do; after all you could experience a power failure, or the user could kill the process using Task Manager. Or Windows itself might kill the process; I've seen this happen when an application took too long to close on system shutdown.

  9. Joshua says:

    Exception: Shared memory used for IPC. Freeing it is irrelevant but you probably need to tell the other end you're hanging up.

  10. JS Bangs says:

    If you actually care about this issue in your app, then the thing to do is to manually perform whatever cleanup is absolutely necessary, then call TerminateProcess(). Boom, no more process, no extra cleanup, no dlls getting notifications, nothing.

    Obviously you shouldn't do this in a shared library.

  11. DWalker says:

    @Leo Davidson:  You're mostly right, but if you combine lots of small allocations within your program into a situation where you ask the OS for one large piece of memory, and then you use various parts of that larger piece for your program, then you are essentially writing a second level of memory allocation/deallocation.  You might have to handle the semantics of your program no longer needing one of the small pieces of memory within the larger chunk, and so on.  Are you going to rewrite all of that code which the language and/or the language libraries already provide?  

    The tradeoffs get complicated…

  12. Philip Taron says:

    It must be noted that Application Verifier http://www.microsoft.com/…/details.aspx will flag your application as having many memory and handle leaks should you do this. There are also resources, such as SQL connections or shared memory or other non-system managed but persistent and long-lived connections that you may truly have to close at this time. Perhaps this is analogous to removing highly flammable materials from your building before it is demolished.

  13. AsmGuru62 says:

    The solution is to use private heaps in the application code.

    Here is how C++ class should look in that case:

    1. Constructor calls HeapCreate() and stores a heap handle.
    2. Add a method GetMem() for a class – it will use heap handle for small allocations

    3. Destructor calls HeapDestroy() – it is better than 1000s of calls to HeapFree

    Now, make that class a base class for other sub-components.
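HeapCreate/HeapDestroy are Win32-specific, but C++17's std::pmr has the same shape: carve all the small allocations out of one arena, then release the whole arena in a single operation instead of issuing thousands of individual frees. A portable sketch of the idea (SumWithArena is made up for illustration):

```cpp
#include <memory_resource>
#include <vector>

// All the small allocations come from one monotonic arena. Individual
// deallocate() calls against it are no-ops; the arena's destructor
// gives everything back in one shot -- the moral equivalent of a
// single HeapDestroy() instead of 1000s of HeapFree() calls.
int SumWithArena() {
    std::pmr::monotonic_buffer_resource arena;
    std::pmr::vector<int> values(&arena);
    for (int i = 1; i <= 100; ++i)
        values.push_back(i);
    int sum = 0;
    for (int v : values) sum += v;
    return sum;
}   // arena destructor releases every allocation at once
```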

  14. alegr1 says:

    Make sure to have your towel with you, though.

  15. jader3rd says:

    So how would you go about teaching and designing the proper behavior? I think that most 'good' programs will be modularized well and each module will know how to clean itself up properly, but in a way which allows for the application to keep running. But when the whole process shuts down all it knows to do is to call cleanup on all of the individual modules, which don't differentiate between proper clean up and "just flush the buffers because we're abandoning the building".

    Visual Studio is the worst offender because they send their SQM data on shutdown which can create some very long shutdowns. A process shouldn't start doing an action on shutdown.

    [I suspect Visual Studio is not sending SQM data during DLL_PROCESS_DETACH. They're probably doing it before the ExitProcess. Because issuing an HTTP request during DLL_PROCESS_DETACH will almost certainly hang. -Raymond]
  16. cleek says:


    god, i hate Visual Studio shutdowns. Friday afternoon, 5pm rolls around i'm packing up. i have to allocate five minutes to shut down the VS instances i've left running all week.

  17. cleek says:

    @Paul M. Parks

    some days, i do. but i always feel a little nervous about it. i don't want to interrupt VS if it's doing something important with my files.

  18. ulric says:

    It's common sense that you don't need to free RAM on exit because it's being freed anyway.

    But in practice, it's a horrible thing to get into or recommend to other developers.  Once you start leaking on exit because "it's ok", the developers no longer check for CRT leaks on exit (in VC++'s output window, for example) because there is too much stuff in there anyway, and software begins to leak memory all over the place. It's dumb to try to build a whole new testing system to avoid that, when the developers can test their code for years, every time they exit the app from the debugger, for free.

    @avakar Stubbing out 'free' during an exit (in Ship anyway)? That's a pretty clever idea.

    [It's more than just freeing memory. It's all the other random cleanup stuff that happens. If you try it during DLL_PROCESS_DETACH, you may discover that your cleanup code calls into a DLL that has already been uninitialized, and then craziness ensues. See paragraph 4. -Raymond]
  19. Mason Wheeler says:

    Why single out Delphi here? If you try to access data after you've freed it you'll get unpredictable behavior in any language.

    [If you follow the link, you'll see that Burak was complaining that my writing is too C++-focused and he uses Delphi. So I figured I'd make this a Delphi-focused entry. -Raymond]
  20. SuperKoko says:

    @Mason Wheeler: Follow the link and search "Delphi" in the page

  21. Klimax says:

    C++11 got std::quick_exit for this. The only thing I don't know is when and how it'll be implemented (and so how useful it'll be).

    Ref: en.cppreference.com/…/quick_exit

  22. Joshua says:

    @benjamin: I saw one application that needed to be closed gracefully.

    It turns out the developer version of the MUMPS db runs from the notification area, and if it's killed without a graceful shutdown, you wait 30 minutes on the next startup while it consistency-checks its databases.

  23. Garrick McPherson says:

    You still must be careful to dispose of the memory of the process. It cannot be guaranteed that all the memory the process allocated is freed because the process could have hidden some memory elsewhere (a favourite place is in USB memory sticks).  You should check for plugged in memory sticks before shutting down the process.

  24. JS Bangs says:

    The problem, of course, is that you may not be able to stop the cleanup tasks from running, or to eliminate them without greatly complicating your program. An obvious example: if your program is written in C++ and you've used RAII everywhere, then all of your destructors are going to run during process shutdown, and those will deallocate memory, close file handles, flush buffers, etc. You can't tell the C++ runtime "don't call any destructors during process shutdown" (and doing so would be asking for trouble later). You *could* have a global flag that says "process shutdown in progress" and have all of the destructors do nothing if the flag is set, but that's a lot of extra work to go through for what is, most of the time, a very marginal benefit.
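The global-flag approach described above can be sketched in a few lines (g_processShuttingDown, g_freeCount, and Widget are all made up for illustration; g_freeCount merely instruments the sketch so the behavior is observable):

```cpp
// Set (e.g. just before calling ExitProcess) to tell destructors that
// resource-freeing is now pointless; only mandatory work should run.
static bool g_processShuttingDown = false;

static int g_freeCount = 0;   // instrumentation for this sketch only

struct Widget {
    ~Widget() {
        if (g_processShuttingDown)
            return;           // building is being demolished: do nothing
        ++g_freeCount;        // stands in for free()/CloseHandle() etc.
    }
};
```

As the comment says, this works, but every destructor has to cooperate, which is the "lot of extra work" being weighed against a marginal benefit.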

  25. Jon says:

    Why specific to DLL_PROCESS_DETACH?

    Shouldn't the title be: "When your process is exiting…"

    "I regularly use a program that doesn't follow this rule." No name and shame? We all know you're talking about Visual Studio :)

    [It's specific to DLL_PROCESS_DETACH because you may not be able to guarantee that your DLL gets its DLL_PROCESS_DETACH before another DLL that you depend on. (And "name and shame" is explicitly contrary to the ground rules.) -Raymond]
  26. Leo Davidson says:

    Programs that spend a long time freeing memory — at process exit or otherwise — can often be improved by putting all those tiny allocations into one or more separate memory heaps/pools which can be thrown away in a single, fast operation when needed.

    It's often the case that the same problem affects other situations — e.g. closing a 'document' but not the whole program — so tackling the more general case seems better to me. And I agree with JS Bangs: I'd rather just write my code to work properly in the general case than have to write, test and maintain special-case logic to skip clean-up on shutdown.

    As was said in the main post, you can't skip *all* clean-up without inspecting what it does, so it's not as simple as killing the process and bypassing all destructors/clean-up code.

    If I have to think about which objects and sub-objects to destroy, why I'm destroying them, which parts of the code to skip depending on the circumstances, re-evaluating all those choices when child objects change (anyone who has had to add IDisposable to a .Net object that's already in use will know that problem), etc… then that seems like a lot of effort, all to help only one situation, when making the clean-up fast in all situations is often no more effort in the short term (and definitely less effort in the long term) and benefits more situations.

    I guess what I'm saying is, the fact that the program can't clean up its unwanted memory quickly is the real problem, not that it is cleaning up before shutdown.

    But, okay, I agree there are cases where the advice is sound. If you know that process shutdown is and will forever be the only time that a buffer will be freed, and you don't care about leak-checking tools, then it makes sense not to free the buffer. I just don't think it's a good idea in the general case, where you don't know those things.

  27. Timmy says:

    A few years ago you said it is not good to leak memory and let the operating system clean up your mess, and now you are saying leak away?!

    Take this blog post for example: blogs.msdn.com/…/3945339.aspx

    (Bing search really does suck at finding stuff, even on Microsoft's own websites, gee)

    So what is it? Is leaking memory/handles good or bad?

    This is confusing.

    [Somebody who thinks they can clean up window handles in DLL_PROCESS_DETACH has already messed up, because windows have thread affinity. -Raymond]
  28. voo says:

    @Timmy: I wouldn't be nitpicking here if the design of your applications caused you to have to clean up window handles in DLL_PROCESS_DETACH. Talk about horrible design…

  29. Jim Lyon says:

    +1 for TerminateProcess. Especially during an abnormal exit. Especially in shutting down a multi-threaded monster.

    I've seen too many people waste too much time architecting and debugging a clean shutdown. It's seldom worth it.

  30. cheong00 says:

    @Timmy: Your DLL should provide functions to clean up the handles that it allocated to perform its function. The calling application should call them to free the handles before your DLL receives DLL_PROCESS_DETACH. If the application does not call them, your DLL shouldn't attempt to free them on the application's behalf in DLL_PROCESS_DETACH.

  31. Evan says:

    @Anonymous coward: "it should be safe to skip the destructors of global objects, regardless of what they do; after all you could experience a power failure, or the user could kill the process using Task Manager."

    Of course, it's also understood that it's generally reasonably acceptable to lose data in situations such as those. And even if you don't lose data, that the program may have to do some extra work on next startup to check its own consistency.

  32. Paul M. Parks says:

    @cleek: You *could* just be a brute by closing your projects, opening task manager, and killing all instances of devenv.exe.

  33. benjamin says:

    This kind of reminds me of people that feel the need to exit any running program (including stuff in the systr–excuse me, notification area) before rebooting.

    It's *already* restarting, broheim, don't worry about it.

    What's funny is that I know people that continue this behavior with their phones, where they'll close any app on their iPhone before rebooting it.

  34. Joe White says:

    Of course, as you commented in a previous blog entry, if you've allocated lots of window handles or GDI handles (I forget which), it will take the OS a long time to free them, because the program *is* supposed to clean those up before it exits, and you don't optimize the OS for ill-behaved programs.

    Then again, you should probably be freeing your window handles well before you get to DLL_PROCESS_DETACH.

    [Right. If you're cleaning up window handles in DLL_PROCESS_DETACH, then you messed up your design pretty horribly. After all, you don't control which thread gets the DLL_PROCESS_DETACH! -Raymond]
  35. Crescens2k says:


    Generally, in DLL_PROCESS_ATTACH you are doing the minimum required to get the DLL itself up and running. So the only things you should be allocating here are kernel objects (mutex, TLS, shared memory) and memory. These get cleaned up immediately after process termination so if you haven't cleaned these up then there is no real pressing concern. So what Raymond was saying is that during DLL_PROCESS_DETACH, don't worry about kernel objects or memory, these get cleaned up right away. It is less frustrating for your users if you close down quickly by not doing pointless cleaning up.

    What that article you linked to was referring to was User and GDI handles. There is a bit of a difference in how these are handled, as they don't get cleaned up immediately like kernel handles do. But I guess this also requires understanding what you are doing in the DLL startup code, and when I said the minimum required to get the DLL up and running, that doesn't include creating windows or allocating GDI objects. Anyway, part of the DllMain documentation explicitly states

    "Calling functions that require DLLs other than Kernel32.dll may result in problems that are difficult to diagnose. For example, calling User, Shell, and COM functions can cause access violation errors, because some functions load other system components. Conversely, calling functions such as these during termination can cause access violation errors because the corresponding component may already have been unloaded or uninitialized."

    so it is against good practice to even attempt silly things like creating the kinds of handles that the link was referring to.

    But to clear up two things. This is talking about DLL-specific cleanup, not process-general cleanup. So you shouldn't take this in the context of the entire application exiting, but in the brief period during application exit when the single DLL that you wrote is processing DLL_PROCESS_DETACH. Secondly, Raymond's posts often require you to use some common sense, and if you have something that genuinely requires cleaning up then you should do it anyway. He never used words like must or required. He used words like should or best bet. That indicates that it is advice, and that you are free to ignore it, but please seriously consider following it.

  36. Jon says:

    @Raymond, re: "It's specific to DLL_PROCESS_DETACH because…"

    If I understand you correctly, you're saying "Don't bother sweeping the floor because you can't do it correctly in DLL_PROCESS_DETACH anyway, and also you're wasting time because the building is being demolished."

    My previous comment was trying to ask, isn't the "wasting time" argument sufficient? Would it be reasonable to say "Don't sweep the floor if you know the building is being demolished (whether or not you're in DllMain)"?

    [Sure, you could generalize it if you wanted to. But remember, good advice comes with a rationale so you can tell when it becomes bad advice. -Raymond]
  37. Crescens2k says:


    I have actually read advice before that when a service is shutting down, especially at system shutdown, it should do so as fast as possible and just leak whatever it needs to.

    So you could easily extend it to process shutdown in general, as long as you stick to kernel handles and memory then there should be very few consequences. (Of course, common sense applies here too.)

  38. ErikF says:

    @Crescens2k: I don't trust DllMain() for setup or teardown anymore because I was bitten long ago when I wrote a system hook DLL. Originally I had the setup logic in DLL_PROCESS_ATTACH, but often the hook procedure would be called before DllMain got a chance to get things going! Fortunately I didn't have a need for cleanup in that DLL.

  39. Ian Boyd says:


    Remember that turning Free() into a no-op doesn't solve the problem. Assume i start with a pointer to a list of objects that has been paged out. i can't just free the pointer, i must free the objects. So now i have to page in my list of 10,000 customers. That's because each customer contains pointers to more data (for example a CustomerName string). Then each Customer can have a list of contacts. Each contact must be paged in, so i can free them.

    The expensive part isn't the actual freeing of memory. The performance hit is iteratively following every pointer down, chasing down every last pointer, until you've traversed every object you have allocated. That recursive process is a waste.

    But it's 10,000% worse if those pages have been swapped out to disk.

  40. harningt says:

    Just a note – we've been bitten by a bug in Outlook since it calls TerminateProcess to kill itself to avoid cleanup slowdown… problem is… PCSC (via winscard.dll) requires 'clean' shutdown, else device transactions are held open. We ran into problems w/ leaked transactions and put something in DllMain and noticed afterwards that it just isn't called… further debugging w/ Depends let us see that no Dll cleanup was performed at all on shutdown – yielding the TerminateProcess conclusion for Outlook 2010. From a KB article about a 2007 'slow shutdown fix', I am guessing in Outlook 2007 they switched to TerminateProcess to close out so that Outlook plugins/etc couldn't slow things down (too bad Outlook or the KB doesn't document the low-level reality).

    Technically PCSC should know the process is dead, but we had to take other complicated measures to perform the clean up.

    If only there was another call made on shutdown to notify DLLs that they are going to be dead soon, so perform some nice cleanup before closing out kills all threads/etc…

  41. alegr1 says:

    @Garrick McPherson:

    process could have hidden some memory elsewhere (a favourite place is in USB memory sticks).

    I'm afraid what you're saying makes no sense.

  42. alegr1 says:

    This kind of reminds me of people that feel the need to exit any running program (including stuff in the systr–excuse me, notification area) before rebooting

    I guess they're conditioned into that by a "helpful" Vista+ list of "hung" applications, that doesn't allow you to respond to the application prompts for saving files. Me, at least.

  43. Roman.St says:

    My god Raymond, this is so true. I've always wondered why C++ programmers so carefully free their static global objects just before the program exits anyway. That seems like a most pointless way of spending CPU cycles.

    It's a real shame that essentially nobody distinguishes mandatory cleanup (like flushing buffers) from optional resource-freeing (like memory and handles)…
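That distinction can be made explicit in code by putting the mandatory cleanup in a named method and leaving only optional resource-freeing to the destructor. A hypothetical sketch (BufferedWriter is made up for illustration):

```cpp
#include <string>

// Hypothetical buffered writer: Flush() is the mandatory cleanup --
// skipping it loses data -- while the destructor's work is optional
// resource-freeing that the OS performs anyway at process exit.
class BufferedWriter {
public:
    void Write(const std::string& s) { buffer_ += s; }
    void Flush(std::string& sink) {   // mandatory: never skip this
        sink += buffer_;
        buffer_.clear();
    }
    ~BufferedWriter() = default;      // optional: memory dies with process
private:
    std::string buffer_;
};
```

An exit path can then call Flush() on the few objects that need it and skip everything else.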

  44. Gabe says:

    romkyns: Actually .NET now has the notion of "critical finalizers", which refers to things that really need to perform mandatory cleanup. This is used to ensure that buffers get flushed, but you could also use one for deleting temp files, etc.

  45. Gregory says:

    "I regularly use a program that doesn't follow this rule. The program allocates a lot of memory during the course of its life, and when I exit the program, it just sits there for several minutes, sometimes spinning at 100% CPU, sometimes churning the hard drive (sometimes both)."

    I know what you've been through. I'm also using Program X a lot.

    [You seem not to have gotten the hint when I deleted your first comment for violation of the Ground Rules linked to from every page, so this time I'm leaving your comment up but with the violation removed. And nobody has yet guessed correctly. NOT THAT I WANT ANYBODY TO BE GUESSING IN THE FIRST PLACE. Because like it says in the ground rules, "the purpose is to discuss problems and solutions, not to assign blame and ridicule." -Raymond]
  46. Neil says:

    Apparently even the CRT calls free on shutdown, if the comments in mxr.mozilla.org/…/Makefile.in are to be believed. (I couldn't get the debug 32-bit CRT to call _free_dbg on shutdown, but I could get the debug 64-bit CRT to.)

  47. Daniel says:

    @romkyns: That's the total opposite of what I've always heard from C++ fans, who love the "uniform treatment of resources" that RAII provides.

  48. Joshua says:

    The trick to make shutdown work with RAII is to call exit() from high on the stack and never fall off main. Then most RAII cleanup never happens. You have to go out of your way to flush buffers unless you're using stdio or iostreams which is hardwired to know about it.

  49. don't be so sure says:

    This isn't always true. Plugins implemented via DLLs get DLL_PROCESS_DETACH when they are unloaded, but that doesn't mean the process is exiting.

    Attach a debugger to Winamp or something and watch what happens when you fiddle with the plugins.

    Freeing memory and closing file handles isn't going to peg your CPU so long that you notice or care. Something else is probably going on with your favorite program.

  50. "Freeing memory and closing file handles isn't going to peg your cpu so long you notice or care. Something else is probablt going on with your favorite prgram"

    It will if it's a long-running app with either a memory leak or a long history implemented by something like a linked list. If it's, say, Raymond's web browser/music player/text editor, keeping a 4 KB block of current state with a pointer to the previous state etc. back to the start, you could easily end up with lots and lots of disk thrashing. Each of those blocks will have been paged out – potentially at different times, hence to different bits of the pagefile – at which point every single block will involve a disk seek. That 1 GB undo/history buffer which seemed so reasonable at the time just became the cause of a quarter-million disk seeks, tying up a typical drive for 20 minutes in the worst case! (In practice, I'm sure many of those page-ins would happen to be readahead cache hits, but it's still painful.)

    Maxing out the CPU as well as/instead of the disk? Someone complained about the app taking a gigabyte of RAM/swap, so the developer switched to storing it in a file – and to deal with complaints about the size, stuck in a simple compression library. Whoops: now each time it frees a single block of data, it's uncompressing and recompressing a bigger block of data and writing that back to disk.

    I don't know or particularly care what Raymond's specific 'X' is: the important thing is that we can all imagine what it's doing and why. The programs MNGoldenEagle, Gregory and Raymond are thinking of probably aren't the same; I've seen this myself, in a program Raymond probably doesn't run. It's possible they all get this from linking to the same library for something – more likely, it's just a common mistake.

    In fact, I seem to recall either Raymond or Mark Russinovich had an example program which allocated a silly number of Windows objects, then exited (optionally being a 'good' program by cleaning itself up first), and letting the system clean up the resources implicitly by exiting was much, much faster than doing it yourself first. I can't seem to find the post right now, though.

  51. Crescens2k says:

    @don't be so sure

    The thing is, he wasn't talking about that case since it is easy to distinguish between when DLL_PROCESS_DETACH is being called because of FreeLibrary and when it is being called at process exit.

    In this case Raymond was not talking about the DLL being dynamically unloaded, he was talking about it being unloaded as part of process shutdown.

    The very first thing you do in the detach is check the lpReserved parameter.

    if (lpReserved == NULL)
    {
        // free memory and resources here
    }
    You see, it is clearly documented that that parameter is used to determine the difference between dynamic loads/unloads and static loads/unloads. Anyway, common sense dictates that you would make sure you do this in a dynamic unload anyway.

  52. Tanveer Badar says:

    @alegr1 About hiding memory in USB, I really hope the original comment was meant as a joke.

Comments are closed.
