When you commit memory, you get a commitment to receive memory when you need it, but no sooner


When you allocate memory, say by creating a shared memory section or by calling VirtualAlloc with the MEM_COMMIT flag, what you have is known to the memory manager as a commit. The memory manager has promised that if you try to access that memory, the memory access will succeed. But that doesn't mean that the memory manager knows exactly which memory chip on your motherboard it's going to use. (Yes, I know that the memory is actually spread out over multiple memory chips, but I'm talking metaphorically. If you want to get all nitpicky, I'll call them physical page frames since that's what they are.)

This is actually obvious if you think about it. After all, if the memory gets paged out, and then paged back in, it might get assigned to a different physical page frame when it comes back. Therefore, the memory manager cannot promise "Okay, if you try to access memory at address 0x00900000, I will map that to physical page frame 0x471."

But what if I turn off paging by disabling my page file?

Sure, that takes the "page it out and back in" scenario out of the picture, but the memory manager still doesn't decide which page frame to assign to your virtual linear address until you actually access it.

Think of a memory commitment as obtaining a contract with a cleaning service to clean your room. You hire them to clean your room a week from Saturday. At the time you sign the contract, the cleaning service will typically not decide which member of its cleaning staff will come to your room. It just remembers that it has committed to cleaning your room, and it ensures that there will be somebody available to do it. For example, if they have a staff of ten people, and it takes a day to clean a room (hey, it's a messy dorm room), then they won't book more than ten jobs per day.

Why don't they pre-assign the jobs? Well, different members of the staff may have different skills, and not all jobs may require all the skills of a particular staff member. For example, suppose the staff consists of two people, Alice and Bob. Alice is good at carpets and Bob is good at blinds. Your dorm room has neither. If the cleaning service decided to assign your job to Alice at the time you signed the contract, then it is taking the risk that it may receive another offer for a carpet job and it would have to turn it down because Alice is already assigned to your room. On the other hand, if it assigned Bob, then it's taking the risk that an offer would come in for a job that included blinds. The best thing for the cleaning service to do is not to assign anybody yet, but still remember that it needs to assign somebody by Saturday morning, so it knows it can accept one more job, but not two. (And it can't accept any jobs which involve both carpets and blinds.)

The memory manager does the same thing. It knows that it needs to assign some page frame to your process, but it doesn't make any decision until you actually use the memory, so it can have maximum flexibility to satisfy other memory allocations in the meantime. And if you free the memory without ever accessing it, then the memory manager didn't need to make the decision at all!

For example, a device driver might request some memory with a particular characteristic; for example, it might ask for memory below the 4GB boundary (because it's communicating with a hardware device that doesn't support addresses above 4GB), or it might ask for 64KB of physically contiguous memory. If the memory manager had decided ahead of time to assign physical page frame 0x471 to your process, it might have to turn down one of those other requests because your page frame was the only one available that fit the bill.

As one of my colleagues explained, "Commit is the system's guarantee you will get a page when you need it. In giving you this guarantee, it does not have to give you the page yet." Commit is merely a promise that when you need the page, it will be there.

The fire department promises that if there's a fire in your house, they will come to try to put it out. But that doesn't mean that there's a tank of water at the fire station with your name on it.

Comments (44)
  1. Alexandre Grigoriev says:

    On the other hand, when a thread stack is reserved, it gets committed in single-page increments (there is an initial committed size, though). If the system is running low on memory, the stack commit may fail for a thread with big stack usage (and the thread will crash).

    [Not sure what's so "on the other hand" about this. The failure happens at commit, because that's when the promise is made. -Raymond]
  2. dave says:

    But what if I turn off paging by disabling my page file?

    In the net circles I travel in, there is a common misunderstanding that "no page file" equals "turning off paging".

    Not so, of course. (1) Stuff still gets allocated to real memory in page-sized pieces, which is the real meaning of "memory paging", and (2) Pages can still be evicted from memory as long as they are backed by something other than the page file (like, for example, the .exe file the program code is running from).

  3. dave says:

    P.S.  Kudos for saying "page frame".

    I hate the locution "physical page". It is a contradiction in terms. Pages are virtual, page frames are physical.

  4. bahbar says:

    I’m wondering what triggered that post. After all, you don’t care at all what physical memory you get when writing user-land code. Was the original question specifically about kernel mode?

    [I forget the exact situation but it was something like “I allocated half a gigabyte of memory but Task Manager’s ‘Physical Memory Free’ didn’t go down.” -Raymond]
  5. bahbar says:

    @dave

    "pages are virtual"

    Well, not in my book. Ahah :D

  6. Does Windows allow you to overcommit memory?

    Under Linux, you can ask for more memory than exists in both physical and swap memory.  The OS will happily commit to giving it to you, even though it can’t possibly service that request.  It isn’t till you actually try to write to that memory that the manager goes out to try to find where to store it.

    This made for some interesting crashes where a machine would run for weeks, but after a while enough pages were dirtied up that the machine would suddenly run out of memory.

  7. Nawak says:

    "Does Windows allow you to overcommit memory?"

    Reading Raymond’s post, it would seem that no, you can’t… else what promise would be made exactly?

    To try?

  8. Echobeach says:

    I wonder how ReadyBoost fits in to all of this. Is it considered to be "as good" as real memory? Or swap?

  9. Josh says:

    IIRC, ReadyBoost-provided memory is a partial mirror of the page file, not a supplement. Everything that goes to a ReadyBoost cache is also written to the page file, so it has no effect on the commit limit; from the point of view of the memory manager, it’s equivalent to swap space.

    The reason it mirrors part of the page file instead of supplementing it is that users are prone to yanking USB sticks at the drop of a hat.  If you paged to the ReadyBoost drive only, your program would suddenly lose access to all memory paged there.  The speed boost from ReadyBoost is from the advantages to short, random reads, writes are still constrained by the hard disk.

  10. someone else says:

    “Under Linux, you can ask for more memory than exists in both physical and swap memory.  The OS will happily commit to giving it to you, even though it can’t possibly service that request.  It isn’t till you actually try to write to that memory that the manager goes out to try to find where to store it.”

    That sounds like the totally logical thing to do … what are the Linux devs smoking??

  11. me says:

    My impression was that you could always circumvent the page file limitation (and not respect the user’s settings) by allocating memory in your own "page file":

    pData = MapViewOfFile(
        CreateFileMapping(
            CreateFile(tempfile, GENERIC_READ | GENERIC_WRITE | DELETE,
                       FILE_SHARE_DELETE, 0, CREATE_NEW,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, 0),
            0, PAGE_READWRITE, 0, bytes, 0),
        FILE_MAP_ALL_ACCESS, 0, 0, 0);

    The memory is released by unmapping it and closing the two handles created.

  12. Alexandre Grigoriev says:

    I mentioned the stack commit failure because it introduces a bit of nondeterministic behavior, almost as bad as overcommit: a random exception under low-memory conditions.

    Though I believe there is now a "stack commit failure imminent" exception. For this, an extra guard page is allocated.

    I think if you run POSIX application under the corresponding subsystem, you could run into overcommit if you do fork().

  13. Cooney says:

    "

    “Under Linux, you can ask for more memory than exists in both physical and swap memory.  The OS will happily commit to giving it to you, even though it can’t possibly service that request.  It isn’t till you actually try to write to that memory that the manager goes out to try to find where to store it.”

    That sounds like the totally logic thing to do … what are the Linux devs smoking??"

    Well, if you google around for 5 minutes, you find that a lot of apps allocate tons of memory and then don’t use it. It’s configurable, so you can turn it off or even tell the kernel how much overcommit is allowed.

  14. “Well, if you google around for 5 minutes, you find that a lot of apps allocate tons of memory and then don’t use it. It’s configurable, so you can turn it off or even tell the kernel how much overcommit is allowed.”

    Looks like a case of selling a bug as a feature. If the default kernel settings allow processes to allocate chunks larger than available memory, programmers will certainly do it: it’s a lot simpler and easier to allocate, say, 2 GB of memory up front and then use it as you need than to allocate it in 1 MB blocks when you need it. The main reason not to allocate all the memory at once is to avoid having your request turned down (well, playing fair with other processes is another good reason). IMHO, overcommitting shouldn’t be allowed because of that, and thus enabling it by default is a design bug.

  15. Cooney says:

    And a lot of apps predate linux or run on multiple unix OSes, so it’s possible they do it because they can get away with it most times.

    Another reason may well be ulimit: in Unix, if you want to constrain a user’s resource usage, just run ulimit and their consumption of memory, processes, etc. is limited. At this point, you go allocate 2 GB of RAM as per common practice, then actually use malloc to get real memory; since you have VM, all this is easy, and since you have ulimit in case something bad happens, no worries.

    Call it a different design philosophy.

  16. 640k says:

    > “I allocated half a gigabyte of memory but Task Manager’s ‘Physical Memory Free’ didn’t go down.”

    An allocation like:

    >> unsigned char *p = new unsigned char[512*1024*1024];

    doesn’t make the physical allocation before the application writes to the allocated memory.

    [Is there an echo in here? -Raymond]
  17. Anonymous says:

    Alexandre Grigoriev: "there’s now a 'stack commit failure imminent' exception. For this, an extra guard page is allocated."

    Ah, so that’s how it is. We were wondering about that extra guard page for a while.

  18. Alexandre Grigoriev says:

    You have to support overcommit if you support fork(). A fork could essentially double the amount of process private committed memory. Unix and Linux cope with that by using copy on write strategy. When a page is modified, another page is allocated. If such allocation fails, bad luck.

  19. Random832 says:

    @Alexandre Grigoriev, you could still fail the fork if there’s not enough memory. Sucks if you were just going to exec() though…

    Maybe block the parent until it’s clear whether or not the child is going to exec.

    Or maybe just go ahead and really commit the memory, and anyone who doesn’t like it can use vfork or posix_spawn instead of fork/exec.

  20. Alex says:

    The general solution is that if you don’t have enough memory to run a program, buy more memory.

    Overcommit just lets you use memory that nobody else is actually using, even though they said they were going to.  Plus, if you start actually running out, tools like swapon give you more swap space.  Magic.

  21. waleri says:

    Am I correct to assume that if the paging file is off, a couple of processes can prevent *other* processes from committing memory?

  22. Billy O'Neal says:

    @Alex: Or just allow commits only to the value allowed by swap. No magic required. You can still use a swap file.

  23. Leo Davidson says:

    waleri wrote:

    "Am I correct to assume that if paging file is off, couple processes can prevent *other* process to commit memory?"

    Yes, they could. Of course, they can do that if the page file is on as well.

    If the physical RAM and page file reach their limits then you can’t allocate any more memory, whether the page file size is zero or not.

    A process can also run out of available memory addresses (obviously much more of a problem for 32-bit processes than 64-bit).

  24. Ian Boyd says:

    I was going to ask the same question as bahbar. Thanks for the reply to his comment, Raymond. It’s nice to have an example where you can see the theory expressing itself.

  25. ulric says:

    "When you commit memory, you get a commitment to receive memory when you need it, but no sooner"

    ok.  but why does it matter?

  26. Gabe says:

    Raymond, yes, I was complaining about overcommitment. I would hope the whole point of doing it the way Windows does is so that you get null from malloc() instead of an exception in some other module that’s unlucky enough to be the first to try to write into that page that can’t be committed.

  27. Lawrence says:

    Cooney: "And a lot of apps predate linux or run on multiple unix OSes, so it’s possible they do it because they can get away with it most times."

    Umm, if you have an app that "predates" Linux (<~1991?), it really shouldn’t be allocating memory blocks big enough to scare any modern OS.

  28. someone else says:

    The more I think about it, the sillier the Linux design looks. Say you have some long-running calculation that may require up to 2GB of memory. There is only 1GB left. On Windows, this fails outright. On Linux, this fails several hours later (unless there’s some activity that frees up memory, and I doubt that on a system that does a calculation over several hours).

  29. Gabe says:

    The problem with only committing memory when you go to ask for it is that you don’t know you’re out of memory until it’s too late to do anything about it. You want to get a null returned from malloc(), not a segfault in the middle of some random transaction.

    Who would want a program that crashes and loses all their data instead of just giving them an error message?

    [A non-NULL return from malloc() comes with a commitment that the memory will be there when you access it. The point is that the memory is not required to be there *before* you access it. (Or are you complaining about overcommittment?) -Raymond]
  30. Random832 says:

    @Anon "You could easily throw fork out of the window in favour of a more sensible way to start a subprocess."

    That’s the problem – you can’t. "Unixy" OSes have just as bad of a legacy problem as windows; being open-source doesn’t change the fact that all the old APIs have to be supported at the source code level. But what they could do is require forked processes to use real memory, and new apps should use either vfork or posix_spawn. [fork/exec is a broken pattern anyway; most programs that use it leak file descriptors. That is why posix_spawn was created.]

    Or maybe even only "overcommit" memory that came from fork and is probably going to go away as soon as exec is done.

  31. Anonymous Coward says:

    I’m dabbling in Linux myself; just a little bit, but enough to know that most Linux *users* fall into two categories: those who don’t know what overcommit is, and those who hate it. Understandable, because there are few things as annoying as all your windows disappearing on you.

    Now, the developers say they want overcommit because they need fork, but I’ve always considered that a non-argument. You could easily throw fork out of the window in favour of a more sensible way to start a subprocess.

    By the way, I think it may be possible to get into an overcommit situation in Windows if you happen to write to lots of pages with copy-on-write access. I don’t know if it’s possible to prevent this situation under all circumstances (doesn’t the image loader use this?) and I wonder how much Windows does to guard you from this.

  32. Alex says:

    Also remember that even with overcommit on, you can touch all those pages before you do an hour of work to make sure there is enough ram.  Or set the overcommit flag off.

  33. Pavel Lebedinsky says:

    "I think it may be possible to get into an overcommit situation in Windows if you happen to write to lots of pages with copy-on-write access."

    No, it’s not possible. When a process maps a copy-on-write view of a file or a pagefile section, Windows charges commit for the entire view.

  34. porter says:

    > Maybe block the parent until it’s clear whether or not the child is going to exec.

    Or use vfork() when are going to exec() or use fork() when you want the traditional behaviour.

    Hey, wait that’s what the functions already do.

  35. Petr Kadlec says:

    “The fire department promises that if there’s a fire in your house, they will come to try to put it out. But that doesn’t mean that there’s a tank of water at the fire station with your name on it.”

    And your bank promises that when you come back for your savings, they’ll give your money back… if not too many people come. Linux is not the only one to overcommit. ;-)

  36. Aaron G says:

    [I forget the exact situation but it was something like "I allocated half a gigabyte of memory but Task Manager’s ‘Physical Memory Free’ didn’t go down." -Raymond]

    Doesn’t Task Manager have a "Commit Size" column?  If that was [similar to] the original question, then it sounds like whoever asked it must have been pretty lazy not to have found that option.

    Although I guess you’d already have to have a pretty lazy mind to not immediately see the difference between committing to doing some work and actually doing the work.

  37. Bill says:

    Having worked on Solaris boxes with overcommit disabled (or maybe just unavailable at all) I can say that I know what it is and I miss it when it isn’t there. That said it was in a situation with fork()ed processes requiring 4GB of physical ram when really about 800k would suffice with copy-on-write and overcommit.

  38. David Walker says:

    "Pages are virtual".  Not if it’s a page who works for the king, whose job it is to keep track of things.  See Note 1.

    The Thing King and the Paging Game

    Rules

    1. Each player gets several million things.

    2. Things are kept in crates that hold 4096 things each. Things in the same crate are called crate-mates.

    3. Crates are stored either in the workshop or the warehouses. The workshop is almost always too small to hold all the crates.

    4. There is only one workshop but there may be several warehouses. Everybody shares them.

    5. Each thing has its own thing number.

    6. What you do with a thing is to zark it. Everybody takes turns zarking.

    7. You can only zark your things, not anybody else’s.

    8. Things can only be zarked when they are in the workshop.

    9. Only the Thing King knows whether a thing is in the workshop or in a warehouse.

    10. The longer a thing goes without being zarked, the grubbier it is said to become.

    11. The way you get things is to ask the Thing King. He only gives out things by the crateful. This is to keep the royal overhead down.

    12. The way you zark a thing is to give its thing number. If you give the number of a thing that happens to be in a workshop it gets zarked right away. If it is in a warehouse, the Thing King packs the crate containing your thing back into the workshop. If there is no room in the workshop, he first finds the grubbiest crate in the workshop, whether it be yours or somebody else’s, and packs it off with all its crate-mates to a warehouse. In its place he puts the crate containing your thing. Your thing then gets zarked and you never know that it wasn’t in the workshop all along.

    13. Each player’s stock of things has the same numbers as everybody else’s. The Thing King always knows who owns what thing and whose turn it is, so you can’t ever accidentally zark somebody else’s thing even if it has the same thing number as one of yours.

    Notes

    1. Traditionally, the Thing King sits at a large, segmented table and is attended to by pages (the so-called “table pages”) whose job it is to help the king remember where all the things are and who they belong to.

    2. One consequence of Rule 13 is that everybody’s thing numbers will be similar from game to game, regardless of the number of players.

    3. The Thing King has a few things of his own, some of which move back and forth between workshop and warehouse just like anybody else’s, but some of which are just too heavy to move out of the workshop.

    4. With the given set of rules, oft-zarked things tend to get kept mostly in the workshop while little-zarked things stay mostly in a warehouse. This is efficient stock control.

    (Attributed to Jeff Barryman, 1972, reprinted in "Expert C Programming" by Peter van der Linden.)

  39. Sy says:

    So, the title says that when I “do commit”, I “get a commitment”?

    Sounds really strange. How’s that?

    The concept of “committing memory” is completely new to me.

    I always saw it as “asking for a commitment” on the part of the memory manager, not that I was “committing” memory. It was the memory manager who was making that promise.

    I do not “commit” a portion of memory. How could I?

    [I’ll assume you’re not a native English speaker, because this sort of bidirectional transitivity is common in English. Committing memory means creating a commitment, and it is the kernel that provides the commitment that the application creates. -Raymond]
  40. Gabe says:

    Here’s a 5th note for David Walker’s allegory:

    5. Sometimes even the warehouses get full. The Thing King then has to start piling things on the dump out back. This makes the game slower because it takes a long time to get things off the dump when they are needed in the workshop. A forthcoming change in the rules will allow the Thing King to select the grubbiest things in the warehouses and send them to the dump in his spare time, thus keeping the warehouses from getting too full. This means that the most infrequently-zarked things will end up in the dump so the Thing King won’t have to get things from the dump so often.  This should speed up the game when there are a lot of players and the warehouses are getting full.
  41. David Walker says:

    Gabe, it’s not MY allegory.  I just remembered it, found it, and posted it here.  But yes, I do remember note 5 from way back.

  42. George Jettson says:

    Wow – have things really changed so little since 1972? No wonder I don’t have my jet pack and flying car yet.  Come on people, clearly we need some major innovating!

  43. mbghtri says:

    @George Jettson – "Wow – have things really changed so little since 1972? No wonder I don’t have my jet pack and flying car yet.  Come on people, clearly we need some major innovating!"

    Change? The workshop and warehouses are bigger by several orders of magnitude, the path from the warehouses to the workshop is now immensely wider, and some Thing Kings run many workshops at the same time. The dump doesn’t even have to be in the same city anymore, it can be in the clouds.

  44. LionsPhil says:

    The worst, *worst*, *WORST* part about Linux overcommitting is that there’s no guarantee as to which poor process(es) will get violently terminated once the kernel realises it’s got itself into a pickle. It’s down to a filthy "badness" heuristic (http://lxr.linux.no/#linux+v2.6.31/mm/oom_kill.c).

    fork() is no argument. You can commit *and* have copy-on-write. Remember: the whole point of this post is that commits are not physical *allocations*; they’re book-keeping of promises.

    What this is is typical UNIX laziness, and the empowering thereof.

Comments are closed.
