Raymond’s post about FILE_SHARE_* bits reminded me of the story about why the bits are FILE_SHARE_READ in the first place.
MS-DOS had the very same file sharing semantics as NT does (ok, NT adds FILE_SHARE_DELETE; more on that later). But on MS-DOS, the file sharing semantics were optional – you had to load the share.com utility to enable them. On a single-tasking operating system, there was only ever going to be one application running, so enforcing the sharing semantics was considered optional – unless you were running a file server, in which case Microsoft strongly suggested that you load the utility.
On MS-DOS, the sharing mode was controlled by the three “sharing mode” bits. The legal values for “sharing mode” were:
000 – Compatibility mode. Any process can open the file any number of times with this mode. It fails if the file’s opened in any other sharing mode.
001 – Deny All. Fails if the file has been opened in compatibility mode or for read or write access, even by the current process.
010 – Deny Write. Fails if the file has been opened in compatibility mode or for write access by any other process.
011 – Deny Read. Fails if the file has been opened in compatibility mode or for read access by any other process.
100 – Deny None. Fails if the file has been opened in compatibility mode by any other process.
Coupled with the “sharing mode” bits are the four “access code” bits. There were only three values defined for them: Read, Write, and Both (Read/Write).
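To make that concrete, the mode byte passed to the MS-DOS open call (INT 21h, function 3Dh) packed the access code into the low bits and the sharing mode into bits 4-6. Here's a rough C sketch of that encoding – the constant names are mine, not from any official header:

    /* MS-DOS open-mode byte, as described above (illustrative names). */
    #define DOS_ACCESS_READ       0x00   /* bits 0-2: access code */
    #define DOS_ACCESS_WRITE      0x01
    #define DOS_ACCESS_READWRITE  0x02

    #define DOS_SHARE_COMPAT      0x00   /* bits 4-6: sharing mode (000) */
    #define DOS_SHARE_DENYALL     0x10   /* 001 - deny read/write */
    #define DOS_SHARE_DENYWRITE   0x20   /* 010 */
    #define DOS_SHARE_DENYREAD    0x30   /* 011 */
    #define DOS_SHARE_DENYNONE    0x40   /* 100 */

    /* "Open for writing, deny other writers" would be: */
    unsigned char mode = DOS_ACCESS_WRITE | DOS_SHARE_DENYWRITE;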
The original designers of the Win32 API set (in particular, the designer of the I/O subsystem) took one look at these permissions and threw up his hands in disgust. In his opinion, there were two huge problems with these definitions:
1) Because the sharing bits are defined as negatives, it’s extremely hard to understand what’s going to be allowed or denied. If you open a file for write access in deny read mode, what happens? What about deny write mode – does it allow reading or not?
2) Because the default is “compatibility” mode, most applications can’t ensure the integrity of their data. Instead of your data being secure by default, you need to take special actions to guarantee that nobody else messes with it.
So the I/O subsystem designer proposed that we invert the semantics of the sharing mode bits. Instead of the sharing bits denying access, they GRANT access. Instead of the default being to allow access, the default is to deny access. An application needs to explicitly decide that it wants to let others see its data while it’s manipulating that data.
This inversion neatly solves a huge set of problems that existed when running multiple MS-DOS applications – while one application was running, another could corrupt the data underneath it.
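In Win32 terms, the share mode you pass to CreateFile is a grant, and passing 0 denies everybody else. A minimal sketch (the path is hypothetical):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open for read/write ourselves; grant other openers read access
           only. A share mode of 0 would deny everything - the default
           posture under the inverted semantics. */
        HANDLE h = CreateFileW(L"C:\\data\\journal.dat",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("open failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... use the file; other would-be writers get a sharing violation ... */
        CloseHandle(h);
        return 0;
    }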
We can easily explain FILE_SHARE_READ and FILE_SHARE_WRITE as being cleaner and safer versions of the DOS sharing functionality. But what about FILE_SHARE_DELETE? Where on earth did that access right come from? Well, it was added for Posix compatibility. Under the Posix subsystem, as on *nix, a file can be unlinked while it’s still open. In addition, when you rename a file on NT, the rename operation opens the source file for delete access (a rename operation, after all, is the creation of a new file in the target directory and the deletion of the source file).
But DOS applications don’t expect that files can be deleted (or renamed) out from under them, so we needed a mechanism to prevent the system from deleting (or renaming) files if the application cares about them. That’s where the FILE_SHARE_DELETE access right comes from – it’s a flag that says to the system “It’s ok for someone else to delete (or rename) this file while I have it open”.
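A small sketch of what that opt-in looks like from the application side (file names are hypothetical):

    #include <windows.h>

    int main(void)
    {
        /* Grant read, write, and delete/rename to other openers. */
        HANDLE h = CreateFileW(L"C:\\temp\\scratch.log", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* Because FILE_SHARE_DELETE was granted, this rename succeeds even
           though the file is still open; without the flag it would fail
           with a sharing violation. */
        MoveFileW(L"C:\\temp\\scratch.log", L"C:\\temp\\scratch.old");

        CloseHandle(h);
        return 0;
    }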
The NT loader takes advantage of this – when it opens DLLs or programs for execution, it specifies FILE_SHARE_DELETE. That means you can rename the executable of a currently running application (or DLL). This comes in handy when you want to drop in a new copy of a DLL that’s being used by a running application. I do this all the time when working on winmm.dll. Since winmm.dll is used by lots of processes in the system, including some that can’t be stopped, I can’t stop all the processes that reference the DLL. Instead, when I need to test a new copy of winmm, I rename winmm.dll to winmm.old, copy in a new copy of winmm.dll, and reboot the machine.
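The rename-and-replace trick, as a sketch (paths are illustrative, and as the comments below note, DLLs the system treats specially may still need a reboot):

    #include <windows.h>

    int main(void)
    {
        /* Step 1: rename the in-use DLL out of the way. This is allowed
           because the loader opened the image with FILE_SHARE_DELETE. */
        MoveFileW(L"C:\\Windows\\System32\\winmm.dll",
                  L"C:\\Windows\\System32\\winmm.old");

        /* Step 2: drop the new build in under the original name. Running
           processes keep their old mapping; new ones pick up this copy. */
        CopyFileW(L"C:\\build\\winmm.dll",
                  L"C:\\Windows\\System32\\winmm.dll",
                  FALSE);   /* FALSE = allow overwriting an existing file */
        return 0;
    }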
Anonymous
May 13, 2004
Why do you have to reboot? Can't you just reopen the application that's using the dll, or restart the service that's using it?
Anonymous
May 13, 2004
A good question: because NT's really smart about DLLs. When the loader maps a DLL into memory, memory management first looks to see if the pages for that DLL are already loaded in physical memory somewhere else.
If they are, it doesn't go to disk to get the pages, it just remaps the pages from the existing file into the new process.
Now there are a bunch of caveats about this mechanism. For instance, when the pages for the DLL are mapped from the existing process to the new process, the pages need to be mapped into the same virtual address in both processes (otherwise absolute jumps to code in the DLL wouldn't work).
This is why it's so important to rebase your DLLs - it guarantees that the pages in your DLL will be shared across processes, which reduces the time needed to load your process and means your process working set is smaller.
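One way to see this in action: an HMODULE is simply the base address the image was mapped at, so a quick sketch like the one below, run from two different processes, shows whether the DLL landed at the same address in both (and therefore whether its pages can be shared):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* The HMODULE returned by LoadLibrary is the image's base address. */
        HMODULE hMod = LoadLibraryW(L"winmm.dll");
        if (hMod != NULL) {
            printf("winmm.dll is based at %p in process %lu\n",
                   (void *)hMod, GetCurrentProcessId());
            FreeLibrary(hMod);
        }
        return 0;
    }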
Anonymous
May 13, 2004
But can't it notice it's a different dll (since you changed the underlying file)? And so have two different copies of the dll loaded, the old one in the old processes and the new one in the new processes?
Anonymous
May 13, 2004
The comment has been removed.
Anonymous
May 13, 2004
I think the "just stop the app" approach is not so amusing during such development. Not that I am any kind of expert on this, but couldn't you put some sort of "state flag" into the DLL indicating that, if "true", the DLL's in-memory code is going to be swapped for new code "somehow"? Then just run some hacky code which loads the new.dll into the space where the old.dll is already loaded.
Anonymous
May 13, 2004
Be aware of the access denied problem with FILE_SHARE_DELETE:
http://blogs.msdn.com/junfeng/archive/2004/04/09/110278.aspx
This is really, really annoying.
Anonymous
May 14, 2004
Actually, I talked to the MM guys about this while writing up today's blog entry. It turns out that in general, the system works as Cesar indicated it should. But winmm is special. Because it's mapped into winlogon, which is a "special" process to the system, some of the normal rules that are applied to DLLs don't apply to it. And that's the real reason I need to reboot the system.
Anonymous
May 14, 2004
Or, perhaps, have the process reference the copy of the DLL on disk by some internal 'node' pointer, and have the 'delete' command just unlink the 'node' from the directory structure. (The space on disk referenced by the 'node' will be freed when the reference count of processes that have it open drops to zero.)
Then, when you create a new file with the same name as the old one, assign it a new 'node' and link it into the directory. No trickery, and you don't have to keep rebooting your machine to install programs or develop DLLs. Surely that would make for a much more stable, multi-user-friendly system, no?
These are well-known and tested concepts, and I'm not sure what benefit there is to anyone in forcing unnecessary reboot cycles even on developers. It's just frustrating.
Anonymous
May 14, 2004
Mr Xinu:
Nice idea... but in the case of winmm.dll - as Larry has said above - the reference count will never drop to zero unless you reboot.
Anonymous
May 14, 2004
It certainly won't solve every problem; some things will require a reboot no matter what - but at least you can restart the processes that really need to pick up the dll right away, and leave the actual reboot for some non-critical time.
There's unquestionably some benefit to this method, though we can argue about its size. What harm would there be in doing it this way?
Anonymous
May 14, 2004
Mr. Xinu - First off, read my comment above (it came in while you were writing yours). It turns out that my issue (requiring a reboot) is directly related to the fact that winmm is listed as a known DLL on my Longhorn system.
In general, if you rename a running DLL, copy in a new version and restart the app, you'll get the new version.
Today's blog entry (currently being reviewed for accuracy - I've learned my lesson) goes into the DLL loading process in more detail.
Anonymous
May 14, 2004
Well, this is one of those things that needs to be looked at, isn't it? It's one of the reasons you can't apply most patches without rebooting the system.
I hope some work is put in to solve some of this in Longhorn, especially if it's going to be automatically updating itself at any time of the day or night.
Anonymous
May 14, 2004
Edward: Of course it's one of those things that's being looked at. The goal for Longhorn is that no patch should require a system reboot.
In fact, XP SP2 has a bunch of changes in it to reduce the number of reboots needed when a patch is applied (my one code contribution to XP SP2 was one of them).
But patches without reboots are a goal, not a requirement (AFAIK - I may be wrong, and they may actually BE a requirement). There may very well be situations that do require a reboot (it'd be hard to change ntoskrnl.exe without rebooting the system, for example). But I have confidence that the people handling this will make Longhorn require significantly fewer reboots than any previous version.
Anonymous
May 14, 2004
How does SHARE_DELETE actually work on NTFS? On FAT? On Unix, unlink removes the reference to the inode, but the actual data survives as long as the inode has a non-zero reference count.
Anonymous
May 14, 2004
Sharing semantics are all done in-memory, so the filesystem involved doesn't matter.
It's pretty simple - if someone attempts to open the file for delete access, the system checks to make sure that everyone who has the file opened has it opened with FILE_SHARE_DELETE access.
If they do, the open is granted; if they don't, it's denied.
As far as the filesystem semantics go, once you delete a file on NT, the file's contents can't be modified.
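A sketch that demonstrates the check Larry describes (the path is hypothetical):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hold the file open WITHOUT granting FILE_SHARE_DELETE. */
        HANDLE h = CreateFileW(L"C:\\temp\\held.txt", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* The delete attempt is refused because an existing opener didn't
           grant delete access: GetLastError() is ERROR_SHARING_VIOLATION (32). */
        if (!DeleteFileW(L"C:\\temp\\held.txt"))
            printf("DeleteFile failed: %lu\n", GetLastError());

        CloseHandle(h);
        return 0;
    }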
Anonymous
May 14, 2004
The NT file deletion semantics are a little... weird. There is nothing like unlink (yet).
Instead, the basic semantics are that you open a handle to a file and then you can mark the handle as "delete on close" (assuming the handle is open for delete access). Once you've done this, when the last handle to the file object is closed (file objects are single-instanced under the hood - there's a kernel object backing the handle that understands things like the current position in the file, but I always get the names mixed up), the file is deleted.
Until that time, no new handles may be opened to the same file, but existing handles remain valid.
But wait. If you have an existing handle to the file with delete access, you can turn the delete on close bit back off.
Funky funky stuff.
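That dance looks roughly like this with the modern SetFileInformationByHandle API (which postdates this 2004 thread - at the time you'd have used NtSetInformationFile; the path is hypothetical):

    #include <windows.h>

    int main(void)
    {
        /* DELETE access on the handle is required to set the disposition. */
        HANDLE h = CreateFileW(L"C:\\temp\\doomed.tmp", DELETE,
                               FILE_SHARE_READ | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* Mark the file delete-on-close... */
        FILE_DISPOSITION_INFO fdi = { TRUE };
        SetFileInformationByHandle(h, FileDispositionInfo, &fdi, sizeof(fdi));

        /* ...then change our mind and turn the bit back off. */
        fdi.DeleteFile = FALSE;
        SetFileInformationByHandle(h, FileDispositionInfo, &fdi, sizeof(fdi));

        CloseHandle(h);   /* the file survives because the bit was cleared */
        return 0;
    }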