You don’t optimize for the case where somebody is mis-using your system


Commenter Rune Moberg asks why it takes the window manager so long to clean up 20,000 leaked objects.

Well, because you're not supposed to leak 20,000 objects. Why optimize something that people aren't supposed to be doing? That would just encourage them!

Actually, this scenario is doubly "don't do that". Creating 10,000 was already excessive; 20,000 is downright profligate. And in order to create 20,000 leaked objects in the first place, you have to override the default limit of 10,000. Surely this should be a clue that you're taking the system beyond its design parameters: After all, you had to go in and change a system design parameter to get this far!

The window manager does clean them up eventually. Nothing goes wrong, but the experience is frustrating. Hopefully that'll be a big clue that you're doing something wrong.
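
For concreteness, here is a minimal sketch (illustrative only, not taken from the original report) of what "leaking 20,000 windows" amounts to: a program that creates windows in a loop and never destroys them. Remember that you can't even get this far without first raising the 10,000-per-process design limit.

    // Illustrative sketch: leak window handles by never calling DestroyWindow.
    // Getting past 10,000 requires overriding the default per-process limit,
    // which is exactly the clue that you're outside the design parameters.
    #include <windows.h>

    int main()
    {
        for (int i = 0; i < 20000; i++) {
            HWND hwnd = CreateWindowW(L"static", L"", WS_OVERLAPPED,
                                      0, 0, 1, 1, nullptr, nullptr,
                                      GetModuleHandleW(nullptr), nullptr);
            if (!hwnd) break; // handle quota exhausted
        }
        // No cleanup: the window manager has to reclaim everything at exit.
        return 0;
    }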

Nitpicker's corner

Of course, you have to apply the principle of "don't optimize for abuse" with your brain still engaged. If the abuse can lead to a denial of service attack, then you have a security issue. (That's not the case here, however. In order to leak 20,000 windows, an attacker has to be able to create 20,000 windows directly, rather than going through an intermediary such as a scripting engine, since the scripting engine will destroy the window rather than leak it. If an attacker can create windows directly, then why bother creating 20,000? Create one and use it to annoy the user.)

Comments (29)
  1. Human wasteland..... says:

    I got the distinct impression that the OP’s question related to the difference between leaking the windows and freeing them. Not only that, but also to the difference between allocating them and freeing them.

    Presumably, cleaning them up manually in the foreground takes longer because there is a bunch of other stuff also happening in the background.

    I’d also guess that background cleanup of the leaks locks up the UI because the thread that performs this function is promoted to a high priority, hence the faster time.

    I have no idea how much checking Windows does, but when you create a window, all that has to be done is allocate, possibly zero the memory, and initialise.

    When you free, you at least have the overhead of merging free blocks on the heap, and otherwise maintaining the free list.
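
    To make that concrete, a toy free list (purely illustrative, not how the real heap works) shows where the extra cost on the free path comes from:

        // Toy address-ordered free list. Allocation can just pop a block,
        // but freeing has to find the right position and merge neighbours.
        #include <cstddef>

        struct Block { std::size_t size; Block* next; };
        static Block* freeList = nullptr;

        void freeBlock(Block* b)
        {
            Block** pp = &freeList;
            while (*pp && *pp < b) pp = &(*pp)->next; // O(n) rummage
            b->next = *pp;
            *pp = b;
            // Coalesce with the following block if physically adjacent.
            char* end = reinterpret_cast<char*>(b) + b->size;
            if (b->next && end == reinterpret_cast<char*>(b->next)) {
                b->size += b->next->size;
                b->next = b->next->next;
            }
            // (A real allocator would also merge with the preceding block.)
        }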

    If Windows does any checking at all, then it also has to look at and deal with anything that you added to the window.

    Compared with creating a plain window (a one-shot exercise), deletion has to deal with all the things that could have been done to the window (even if they were not) in order to return to a known state.

  2. Human wasteland (again)..... says:

    Epilogue:

    As for the validity of doing this, I always thought it was a bit like one of those "cool guy" twitches, like Elvis had.

    I mean, whenever I’m writing an application, I always make sure that it creates 20,000 empty windows and abandons them, right before exit.

    :))

  3. Mike Dimmick says:

    This doesn’t mean you can’t have 20,000 items in your main window that the user can see/interact with – it just means they shouldn’t be windows.

    A lot of people seem to be afraid of designing their own window classes. Maybe they’re afraid of WM_PAINT? I myself have no qualms about writing my own controls. I once wrote something similar to a masked edit control (I wanted a numeric input field with a decimal point in a fixed place, for inputting amounts of money) in C# on .NET Compact Framework, deriving only from Control. It doesn’t handle selections, but then it doesn’t need to in that application.
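
    For illustration, here is the skeleton of that idea in plain Win32 terms (a sketch with made-up names, not the control described above): one window whose procedure paints every item itself, so thousands of items cost zero extra window handles.

        // One window, many windowless "items": WM_PAINT draws each item
        // instead of hosting a child HWND per item.
        #include <windows.h>

        const int ITEM_COUNT = 20000;
        const int ITEM_HEIGHT = 16;

        LRESULT CALLBACK ItemListProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            if (msg == WM_PAINT) {
                PAINTSTRUCT ps;
                HDC hdc = BeginPaint(hwnd, &ps);
                // Draw only the items intersecting the invalid rectangle.
                int first = ps.rcPaint.top / ITEM_HEIGHT;
                int last = ps.rcPaint.bottom / ITEM_HEIGHT;
                for (int i = first; i <= last && i < ITEM_COUNT; i++) {
                    RECT rc = { 0, i * ITEM_HEIGHT, 300, (i + 1) * ITEM_HEIGHT };
                    wchar_t text[32];
                    wsprintfW(text, L"Item %d", i);
                    DrawTextW(hdc, text, -1, &rc, DT_SINGLELINE | DT_VCENTER);
                }
                EndPaint(hwnd, &ps);
                return 0;
            }
            return DefWindowProcW(hwnd, msg, wp, lp);
        }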

  4. Cesar Eduardo Barros says:

    Mike:

    It doesn’t handle selections, but then it doesn’t need to in that application.

    That’s precisely why people do not want to design their own interface elements. There are too many details you can forget (perhaps you never use the keyboard and don’t notice it misbehaves when tabbing into it, perhaps it doesn’t work well in exotic locales you never heard of, perhaps it does something wrong on high-DPI screens, and so on), and your control probably has more bugs (more widely used code receives more testing, and thus even obscure bugs in it will eventually be found by someone). It is better to reuse than to create your own. Of course, if nothing fits your needs, you have no choice but to write it.

  5. Tom Parker says:

    What a daft question.

    Creation is just make something and dump it somewhere.

    Destruction is always rummaging around to find the thing you want to destroy and then destroying it. Unless you’re using an array (not usually a good choice), the rummaging around will take time, often a time proportional to the number of items.

    [Finding the thing isn’t hard. It’s untangling it from all the other stuff that’s hard. By analogy: Create a web page on your e-commerce site: Easy. Destroy that web page: Hard, because you have to go fix all the links to that page from elsewhere in the site. -Raymond]
  6. I get the point, but I still think this is a valid question.

    The question mentions both destroying the windows and leaking the handles. And it says the window manager is actually faster at destroying the handles if they are leaked (with a stalling side-effect). Destroying the handles yourself, as you are supposed to do, is slower.

    10000 handles per process is excessive, but let’s say you have 10 processes using 2000 handles each (or 20 processes with 1000 handles each) and you get a legitimate 20000 handles, which will take quite some time to destroy. Is that beyond design parameters? If yes, then I’d say that even making that many handles possible is bad design. If no, then why isn’t this optimized, and why does destruction take so long (as opposed to creation)?

  7. John says:

    Doesn’t hacking stuff in the name of backwards compatibility also encourage people to do things they shouldn’t?

  8. Cody says:

    [Doesn’t hacking stuff in the name of backwards compatibility also encourage people to do things they shouldn’t?]

    No, it avoids punishing users for the mistakes of developers.

  9. What a daft question.

    If you had read recent related posts, you’d know that ‘finding’ will indeed be easy. The bottom 16 bits of a handle are an index into a table.
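
    (For illustration, assuming the layout just described; it’s an implementation detail, not a documented contract:)

        // Sketch: extract the table index from the low 16 bits of an HWND.
        #include <windows.h>

        unsigned HandleTableIndex(HWND hwnd)
        {
            return static_cast<unsigned>(reinterpret_cast<ULONG_PTR>(hwnd)) & 0xFFFF;
        }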

    In my experience, creation usually takes more time, since you need to do initialization work. Destruction is, most of the time, just freeing some memory.

    It’s untangling it from all the other stuff that’s hard.

    That would’ve been a better answer to a valid question.

    The thing is called ‘Windows’; a window seems to me to be the basic building block. I’d expect both the creation and the destruction of a window to be fully optimized.

  10. CDarklock says:

    I felt like the question was a little different, and came in two parts.

    1. Why, in 64-bit Windows, is it actually faster to leak window handles than to properly destroy them? (This seems obvious: think about how you destroy windows. You send a message. That yields control to the O/S, which then round-robins through other available applications. When the O/S cleans up, it doesn’t send messages, it just kills the windows. Since it’s not yielding to other applications, it gets things done faster, but it’s also blocking the UI.)

    1a. (implied by the answer to 1) Why does the O/S block the UI at all? (Duh: so it can clean up faster. THIS is the "optimising for abuse" question. If you leak a little, the blockage is minimal. If you leak a lot, this is BAD and you should FIX it. Complaining about the blockage is like complaining that your Ford Focus is sluggish when towing a boat.)

    2. Why does the O/S have such a large performance difference between cleaning up 18,000 handles and cleaning up 20,000 handles? Take as read that either case is bad, but what is the underlying system that goes from "works fine" to "sucks big time" as you cross that line?

    I think question 2 is a valid question, not as someone designing an application who wants to know why exactly he shouldn’t create 20,000 windows, but as someone curious about a system who simply wants to understand it. The information probably has no practical use, but we’re geeks! We like to know how things work! Come on, man!
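
    For the curious, the message traffic in question is easy to observe with a logging window procedure (a sketch; WM_DESTROY and WM_NCDESTROY are the documented part, the rest is guesswork):

        // Orderly destruction delivers messages through the window procedure:
        // DestroyWindow sends WM_DESTROY, then WM_NCDESTROY (the last message
        // a window receives), and destroys child windows too. Leaked windows
        // cleaned up by the system skip this per-window conversation.
        #include <windows.h>

        LRESULT CALLBACK LoggingProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            if (msg == WM_DESTROY)
                OutputDebugStringW(L"WM_DESTROY\n");
            else if (msg == WM_NCDESTROY)
                OutputDebugStringW(L"WM_NCDESTROY\n");
            return DefWindowProcW(hwnd, msg, wp, lp);
        }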

  11. Miles Archer says:

    Burak,

    What the heck does "optimized fully" mean?

  12. Peter says:

    Miles: "Optimised fully" means you put #pragma optimise=full just before that bit of the code, which sprinkles magic pixie dust on it to make it go faster.

  13. > What the heck does "optimized fully" mean?

    Actually I meant ‘optimized to the maximum extent possible for performance and memory usage, carefully calculating trade-offs, removing any possible bloat’. It may be hand-optimized in assembly for target CPUs, some internal structures or algorithms may be changed for better performance, even some parts can be redesigned; those sorts of things.

    Of course, a profiler will show what part to optimize. But a window is such a common object in Windows that I'd assume it's a good idea to optimize window creation and destruction without doing any profiling.

    If destroying 20000 window objects is taking too much time, it means destroying a single window is taking too much time.

    Raymond says it's taking too much time, because it's not designed/optimized for that.

    Why are there so many handles available if they aren't meant to be used (by many processes)? If they are meant to be used, why is Windows not optimized to handle the destruction of window objects? I thought it was optimized, but Raymond says it isn't… (To be fair, this post mentions only leaked objects, and I agree with the reasoning. On the other hand, the question also mentions normal window destruction with the same results.)

    [Window creation/destruction is not part of any sane program's inner loop. -Raymond]
  14. Cooney says:

    No, it avoids punishing users for the mistakes of developers.

    You gotta punish someone, or would you have Windows spend billions figuring out how to mollycoddle every guy who picked up a copy of VC++? Why shouldn’t a crappy application perform and behave crappily?

  15. Rune Moberg says:

    As I mentioned in a previous comment, the app I work on happens to present the user with several pages of windows. The user will add window by window, page by page, and we cut window handle usage to a bare minimum. Delphi’s TPanel control uses a handle, so we stopped using TPanel on our forms. Our own home-brewed custom controls only use a window handle if absolutely necessary. Etc., etc.

    Our worst enemy was the desktop heap size. We exhausted the desktop heap long before running out of handles. At one point (two years ago, when I first asked) I also looked into handle usage, but it was a red herring, as we didn’t use more than a couple of thousand handles (I don’t recall the number off-hand, and some of our users might’ve touched the 10000 limit for all I remember).

    For us it wasn’t about leaked handles. Some of our users track _a lot_ of information sources, and we left the window management to Windows’s window manager (which I assume most developers do by default).

    Which brings us full circle to my question on yesterday’s subject of the 10000 handle limit: At what point are we supposed to do the window manager’s job? (we’ve done that in our app now, so don’t worry about us anymore…)

    As I see it, the 10000 handle limit was set at a time when 15″ monitors were the norm. I think it is safe to assume the average Windows installation uses a lot more handles now than, say, five years ago. Also note that some people like to use (waste) additional window handles for things like Winsock communication… (And I’ve never seen anyone from MS calmly explain why such a practice is bad — for all I know it could be the most efficient way of doing socket communication under Windows, but I hope that isn’t the case.)
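
    (The practice I’m referring to is WSAAsyncSelect-style notification, where every watched socket is tied to a window handle. A minimal sketch, names mine:)

        // Network events arrive as window messages, so each notification
        // sink needs an HWND (often a hidden window created just for this).
        #include <winsock2.h>

        const UINT WM_SOCKET_EVENT = WM_APP + 1;

        void watchSocket(SOCKET s, HWND hwndNotify)
        {
            // FD_READ/FD_CLOSE will be posted to hwndNotify as
            // WM_SOCKET_EVENT; WSAGETSELECTEVENT(lParam) says which one.
            WSAAsyncSelect(s, hwndNotify, WM_SOCKET_EVENT, FD_READ | FD_CLOSE);
        }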

    Pre-emptive defense: The default 32-bit desktop heap size will not accommodate more than approximately 30 instances of IE 5.5 (showing a web page from a news site). So all right, the handle limit still makes sense, but I still dislike the default heap size, both for the interactive desktops and the one for services. 64-bit Windows luckily solves this.

    Rune (on vacation — going to see The Rolling Stones in Budapest tomorrow)

    [If you’re looking for something specific like “You should switch when you reach 1793 windows” then you’re not going to find it. Windows are not cheap objects, and that article discusses the trade-offs. -Raymond]
  16. steveg says:

    pragma optimise=full

    Topic suggestion: "Top 10 #pragma commands you never knew"

    #pragma bugfree

    #pragma youknowhatImean

    #pragma hidefromboss

    etc

  17. Rune Moberg says:

    “Window creation/destruction is not part of any sane program’s inner loop”

    Funny you should say that…

    It is now.

    As we can’t hide/show windows when the user switches to a different page (because we exhaust the desktop heap), we effectively destroy a bunch of windows and create some new ones. (we define “page” as a collection of windows — the user can quickly switch to different information views this way)

    Of course, it isn’t anywhere near 10000. Let us say a couple of hundred are destroyed and a couple of hundred are created.

    [Zero to within experimental error. Not enough that will show up on profiling runs of typical usage. -Raymond]
  18. Dean Harding says:

    If destroying 20000 window objects is taking too much time, it means

    destroying a single window is taking too much time.

    That’s not necessarily true. And in fact, in your original comment you explicitly point out:

    The sweet spot is somewhere between 10000 and 18000 handles.

    Any more than that and the window manager goes bananas.

    So between 10,000 and 18,000 handles is "fast" but 20,000 is extremely slow — it’s clearly not a linear increase, and therefore destroying a single window is not necessarily "taking too much time".

    For all we know, Windows might simply have a "if (handle count >= 20000) { do_it_really_slow(); }" somewhere as a punishment for developers who do bad things. That’s what people are always asking for, isn’t it? Punish developers who write crappy software? Isn’t it?

  19. 640k says:

    Windows are not optimized for handling windows.

    rotfl.

  20. Waste humanland..... says:

    What’s the beef here?

    I’ve written a few controls for my sins, including, as it happens, a tree control.

    It’s a scary prospect doing something like that when you consider all the things you could do wrong!!!!

    Nevertheless, what I did in the end was to tile the thing up. So I have an object that can draw itself in a window (the control), and if I want to float that control higher up in the z-order, I create a new window on the desktop, which floats in the right place above the tree control. The little floating jobbie gets its drawing from the big tree control window.
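
    (Roughly like this; a sketch with made-up names:)

        // A tile object draws itself into whatever DC it is handed, so the
        // big tree-control window and the little floating window can both
        // ask it to paint, without a child HWND per tile.
        #include <windows.h>

        struct TreeTile
        {
            void Draw(HDC hdc, const RECT& rc) const
            {
                // Real code would render this tile's nodes; a frame will do here.
                FrameRect(hdc, &rc, static_cast<HBRUSH>(GetStockObject(BLACK_BRUSH)));
            }
        };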

    O.K., that’s not complicated, but where exactly is the problem with having multiple disparate objects that draw into multiple disparate windows?

    I always figured that Windows windows are objects that reflect the hierarchy of your application, rather than the tools to do the job.

    Surely that’s what this is all about: writing the code that does the difficult stuff, quickly. If you’re still constraining yourself to the Document/View model when you get down to the level of convolving bitmaps, you really can’t expect to get the performance too.

  21. steelbytes says:

    detouring from window handles back to the title of this blog entry …

    One time you may want to optimize the handling/cleanup of misuse of your app/OS is when it could otherwise lead to a denial of service. If it takes 10 minutes to respond to a request, then that could be considered a DoS.

    [Sometimes I wonder why I even bother with the Nitpicker’s Corner. -Raymond]
  22. Dewi Morgan says:

    Steelbytes has a point: DoSes should be "fully optimised" so that we can be DoSed faster! More seriously, if you optimised for the DoSing case, then the DoSser would have to write "200,000" instead of "20,000" – a whole ‘nother character! They’d never think of that!

    Burak KALAYCI: "optimised fully" presumes there’s a "fastest" way to do something in all cases. But "optimisation" usually means "picking the best tradeoff". Odds are, window destruction’s optimised for the case where few are destroyed (normal use), rather than lots (something broken). CDarklock points to one possible tradeoff: blocking the UI. I’m sure there’re others.

  23. KenW says:

    Burak: "If destroying 20000 window objects is taking too much time, it means destroying a single window is taking too much time."

    Faulty logic. This makes as much sense as saying "If $20,000 is too much money, then $1 is too much money."

  24. Window creation/destruction is not part of any sane program’s inner loop.

    Exactly. When I run 20 instances of an app that uses 1000 windows, I’ll have all 20000 handles to destroy at the time I’m closing them… I understand that this would be heavy usage and Windows isn’t optimized for this. I was just surprised by the fact. There are so many handles available for use, but then you are not supposed to use them…

    Dean Harding:

    in fact, in your original comment you explicitly point out

    I wasn’t the one who asked the original question.

    it’s clearly not a linear increase

    I hoped for the best and assumed it was linear.

    Dewi Morgan:

    But "optimisation" usually means "picking the best tradeoff".

    The following was my description:

    ‘optimized to the maximum extent possible for performance and memory usage carefully calculating trade offs, removing any possible bloat’

    KenW:

    Faulty logic.

    "If $20,000 is too much money, then $1 is too much money."

    I’ll admit it’s incomplete. If you were to give me $1 every millisecond for the next 20000 milliseconds, then maybe $1 is too much money, because $20000 is too much money :)

    What I meant was pretty clear, I think.

  25. Cody says:

    [You gotta punish someone, or would you have Windows spend billions figuring out how to mollycoddle every guy who picked up a copy of VC++? Why shouldn’t a crappy application perform and behave crappily?]

    I believe the classes of punishable entities are as follows:

    User – Doesn’t care who gets punished as long as it’s not him.

    Developer – Would rather the user not get punished but really doesn’t want to get punished himself, so he’d rather MS take the punishment.

    MS – Doesn’t want to punish the user and in most cases punishing the developer is not an option (MS can’t force developers to do anything) so they have to eat the cost.

  26. Dean Harding says:

    > When I run 20 instances of an app that uses 1000 windows,

    > I’ll have all 20000 handles to destroy at the time I’m closing them…

    How do you know that’s the same situation, though? The only situation that has been tested so far is a *single process* destroying 20,000 windows. That’s not necessarily the same as 20 processes destroying 1,000 windows each.

  27. How do you know that’s the same situation, though?

    I don’t.

    I hope DestroyWindow is optimized for destroying top-level windows. (By that I mean that it treats the case as special, and takes advantage of the special situation when destroying the children.)

  28. carlso says:

    > Well, because you’re not supposed to leak 20,000 objects. Why optimize something that people aren’t supposed to be doing? That would just encourage them!

    It is this kind of thinking that makes Windows behave so poorly much too often.

    Your basic premise is flawed.  You assume that if a program is leaking 20,000 objects, it is doing it on purpose and was designed that way.  Most likely, a program is doing this because of some bug that the developer and testers did not come across.  It is not an intended action of the program.

    A well-designed OS can efficiently take care of good programs gone bad.  A mediocre OS assumes that all programs will be well-behaved and never do anything outside the bounds of stated rules.  When programs behave badly, often it is not intentional.  But if you do not handle such cases efficiently, it is the user of your operating system that suffers.  Saying that “we do not optimize for someone mis-using our system” is a cop out.

    For example, it’s easy to settle for an O(n^2) algorithm by rationalizing that it’s okay because n will never be large. But you have no guarantee, do you? If you allow a program to create a condition where n is large, then you need a better algorithm to deal with the situation.
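
    A toy illustration of the difference (nothing to do with how the window manager actually works):

        // Tearing down n items one at a time from a structure that compacts
        // on every erase is O(n^2); a bulk teardown is O(n).
        #include <vector>

        void teardownSlow(std::vector<int>& items)
        {
            while (!items.empty())
                items.erase(items.begin()); // each erase shifts the whole tail
        }

        void teardownFast(std::vector<int>& items)
        {
            items.clear(); // one pass
        }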

    It’s interesting that most people’s view of Windows falls along the same lines.  That is, in most cases, it performs well.  But sometimes, it really stinks.

    [You have to decide what you’re going to optimize for. An algorithm that works well for bad programs may not work well for good programs (e.g., constant overhead too high). Windows prefers to optimize for good programs. -Raymond]
  29. carlso says:

    > You have to decide what you’re going to optimize for. An algorithm that works well for bad programs may not work well for good programs (e.g., constant overhead too high).

    Perhaps you need two algorithms?

    Progress is made when someone doesn’t accept their given constraints.  Constraints provoke creativity and the creative solution.

    [Two algorithms still has a cost. -Raymond]

Comments are closed.
