I got an email from someone using the contacts form asking:
There is an article on MSDN about using VirtualAlloc to reserve then commit memory pages. Here is the link: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/reserving_and_committing_memory.asp
The article demonstrates a use of SEH to handle a page fault and then commit the appropriate page at runtime. My question: is SEH being used for performance reasons? I mean alternatively we could write a special allocator function that checks the allocation range and commits new pages when necessary without triggering a page fault. Of course such code would be run for every allocation, whether or not it actually required a new page to be committed.
Can you elaborate?
It’s actually a good question. It turns out that even though I’ve come down hard on the use of SEH as a mechanism to ensure reliability, there ARE a few places where SEH is not only a good idea, it’s required.
IMHO, the techniques shown are valid, but they’re less likely to be used than the article implies. However, if you were going to implement a sparse memory manager, where you’d like to reserve a huge chunk of memory and commit the pages as needed, it might make sense.
And the function DOES show one of the three places where SEH is reasonable. They are:
- Memory Mapped Files
- RPC
- Security Boundary Transitions
In all of these cases, it’s required that you use SEH. The first two because SEH is used to propagate out-of-band error information, the third because you cannot trust the contents of memory handed to you by someone on the other side of a security boundary.
For Memory Mapped files (and RPC), the system has to have a way of communicating error status to the caller. If you attempt to read from a memory mapped file and an error occurs when reading the file, there’s no way of “failing” a read – it’s just a MOV CPU instruction, and it has no failure semantics. As a result, the only way that the system can “fail” the operation is to abort the instruction with some form of access violation.
The only way to catch such an access violation is to use SEH to wrap the access to the memory. The “Reserving and Committing Memory” article shows how to do that, and demonstrates some techniques for inspecting the actual cause of the failure.
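To make the commit-on-demand idea from the question concrete, here’s a minimal sketch in the style of the “Reserving and Committing Memory” article: reserve a large range, then let the SEH filter commit the faulting page and resume the failed instruction. This is Windows-only and untested as shown; the filter name `CommitOnDemandFilter` and the 16MB size are my own choices, not from the article.

```c
#include <windows.h>
#include <stdio.h>

#define RESERVE_SIZE (16 * 1024 * 1024)  /* 16MB reserved, committed on demand */

static int CommitOnDemandFilter(DWORD code, LPEXCEPTION_POINTERS ep)
{
    if (code != EXCEPTION_ACCESS_VIOLATION)
        return EXCEPTION_CONTINUE_SEARCH;   /* not our fault; keep searching */

    /* For an access violation, ExceptionInformation[1] holds the
       faulting address. */
    LPVOID faultAddress =
        (LPVOID)ep->ExceptionRecord->ExceptionInformation[1];

    /* Commit the page containing the faulting address; VirtualAlloc
       rounds the base address down to a page boundary for us. */
    if (VirtualAlloc(faultAddress, 1, MEM_COMMIT, PAGE_READWRITE) == NULL)
        return EXCEPTION_EXECUTE_HANDLER;   /* truly out of memory: fail */

    return EXCEPTION_CONTINUE_EXECUTION;    /* retry the faulting MOV */
}

int main(void)
{
    /* Reserve address space only - no physical storage is committed yet. */
    char *base = VirtualAlloc(NULL, RESERVE_SIZE, MEM_RESERVE, PAGE_READWRITE);
    if (base == NULL)
        return 1;

    __try {
        base[0] = 'a';                  /* faults; filter commits first page */
        base[RESERVE_SIZE - 1] = 'z';   /* faults; filter commits last page */
        printf("%c %c\n", base[0], base[RESERVE_SIZE - 1]);
    } __except (CommitOnDemandFilter(GetExceptionCode(),
                                     GetExceptionInformation())) {
        printf("commit failed\n");
    }

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```

Note that the per-access cost the questioner worried about disappears here: already-committed pages are accessed at full speed with no checks, and the filter only runs on the (rare) first touch of each page.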
For RPC, they have a similar problem. RPC allows an application to define the full semantics of the function being remoted – there’s no way of communicating transmission failures to the application, so once again, the system needs to have a way of propagating out-of-band error information. That’s why RPC calls should always be wrapped with RpcTryExcept/RpcExcept/RpcEndExcept sequences.
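A hedged sketch of that wrapping pattern is below. `GetRemoteWidgetCount` is a hypothetical MIDL-generated client stub I made up for illustration; the RpcTryExcept/RpcExcept/RpcEndExcept macros and `RpcExceptionCode` come from rpc.h. Windows-only, not tested as shown.

```c
#include <windows.h>
#include <rpc.h>

/* Hypothetical MIDL-generated stub; may raise an SEH exception if the
   transport fails rather than returning an error. */
unsigned long GetRemoteWidgetCount(handle_t binding);

unsigned long SafeGetWidgetCount(handle_t binding, RPC_STATUS *status)
{
    unsigned long count = 0;
    *status = RPC_S_OK;

    RpcTryExcept {
        count = GetRemoteWidgetCount(binding);
    }
    RpcExcept (EXCEPTION_EXECUTE_HANDLER) {
        /* The exception code IS the out-of-band error information:
           an RPC_STATUS such as RPC_S_SERVER_UNAVAILABLE. */
        *status = RpcExceptionCode();
    }
    RpcEndExcept;

    return count;
}
```

Without the wrapper, a transport failure would propagate as an unhandled exception and kill the process, because the stub’s signature has no room for a transmission error.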
The third case in which SEH is reasonable is when dealing with accessing data that is passed across security boundaries. When data is passed across a security boundary, you cannot EVER trust the caller, because that leads to security holes. There have been a number of security bugs in both Windows AND *nix caused by this problem. To resolve this, you need to copy all the data from the user into a kernel data structure – and since the user’s memory may be invalid, or may become invalid at any moment, that copy has to be guarded with SEH. If you don’t, you’ll bluescreen the system (on Windows). The same thing holds true for highly privileged services (and other security boundaries). The advantage that services have is that they live in another address space, so it’s less likely that their caller has direct access to their address space (it can happen if your service communicates using named shared memory though).
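The kernel-mode version of that probe-and-capture pattern looks roughly like this. This is a sketch only, assuming the WDK headers; the function name `CaptureUserBuffer` is mine. The key point is that both the probe and the copy sit inside the __try, because any user page can disappear between the two.

```c
#include <ntddk.h>

NTSTATUS CaptureUserBuffer(
    PVOID UserBuffer,   /* untrusted pointer from user mode */
    SIZE_T Length,
    PVOID KernelCopy)   /* caller-supplied kernel buffer of Length bytes */
{
    NTSTATUS status = STATUS_SUCCESS;

    __try {
        /* Raises an SEH exception if the range isn't valid user-mode
           memory (e.g., the user passed a kernel address). */
        ProbeForRead(UserBuffer, Length, 1);

        /* Capture a snapshot; after this, the user can't change the
           data out from under us (no time-of-check/time-of-use race). */
        RtlCopyMemory(KernelCopy, UserBuffer, Length);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        status = GetExceptionCode();  /* e.g. STATUS_ACCESS_VIOLATION */
    }

    /* From here on, validate and use only KernelCopy, never UserBuffer. */
    return status;
}
```

Without the __try, a hostile caller handing in a bad pointer turns a simple copy into that bluescreen.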
Bottom line: Don’t use SEH unless you’re in one of the three scenarios above. And even then, think long and hard about it.