A fine detail on how DLLs are relocated as the result of a base address collision, and consequences


If a DLL must be relocated due to a base address conflict, then the image will be relocated, and the entire relocated DLL is now backed by the page file.

If you read the description more carefully, you'll see that it's not exactly the entire relocated DLL that gets backed by the page file. More precisely, all the pages that contained fixups are put into the page file. If you're lucky and have a page without any fixups, then that page will still be demand-paged from the image because the kernel didn't apply any fixups to it, and therefore did not incur a copy-on-write for that page, so it continues to be backed by the file system image.
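The distinction can be observed directly. Here is a minimal sketch (Windows-only, link with psapi.lib; inspecting the process's own EXE as a stand-in for any loaded module) that walks a module's pages with QueryWorkingSetEx and reports which resident pages are still shared with the on-disk image and which have gone private via copy-on-write:

```c
// Sketch: classify a module's resident pages as image-backed (shared)
// or copied-on-write (private). Windows-only; link with psapi.lib.
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // GetModuleHandle(NULL) = the EXE itself; substitute any loaded DLL.
    HMODULE mod = GetModuleHandleW(NULL);
    MODULEINFO mi;
    GetModuleInformation(GetCurrentProcess(), mod, &mi, sizeof(mi));

    for (SIZE_T off = 0; off < mi.SizeOfImage; off += si.dwPageSize) {
        PSAPI_WORKING_SET_EX_INFORMATION wsx = { 0 };
        wsx.VirtualAddress = (BYTE *)mod + off;
        if (QueryWorkingSetEx(GetCurrentProcess(), &wsx, sizeof(wsx)) &&
            wsx.VirtualAttributes.Valid) {
            printf("%p  %s\n", wsx.VirtualAddress,
                   wsx.VirtualAttributes.Shared
                       ? "shared (demand-paged from the image)"
                       : "private (copy-on-write; pagefile-backed)");
        }
    }
    return 0;
}
```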

One of the arguments I've seen for intentionally causing a base address collision is so that the relocated DLL gets copied into the page file, which is a win if the page file is on a faster medium than the DLL. For example, the page file may be on an SSD or (gasp) a RAM drive.

That logic fails to take into account the case of pages with no fixups. Those pages will still page in directly from the original file, which can be a problem if the original file is on a very slow medium, or on a medium that could be lost, such as a CD-ROM drive or a network drive.


Fortunately, you don't need to play funny games with base address conflicts to get your entire DLL loaded into the page file. Instead, use the /SWAPRUN linker flag which lets you specify in the module header that the loader should copy the image into the swap file.
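For reference, the flag takes qualifiers naming the media to guard against, and can also be applied to an already-linked binary with EDITBIN (the file names here are placeholders):

```
link /SWAPRUN:NET,CD myprog.obj

rem Or retrofit an existing image:
editbin /SWAPRUN:NET myprog.exe
```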

Comments (22)
  1. Mason Wheeler says:

    For example, the page file may be on an SSD or (gasp) a RAM drive.

    I’m not gasping here, just puzzled. How does it make any sense to put the page file–the place where you set data aside for safekeeping when you run out of RAM–in RAM?

    1. I actually contemplated this years ago… my desktop had plenty of RAM… but one app (won’t name due to policy) would use the page file for various stuff… I tried disabling the page file entirely, which worked well for any other app… since the app didn’t consume all of the system’s memory, I’d contemplated using RAMDISK to create a place to put the page file… all the performance of RAM, none of the app issues.

      I think I ended up just limiting the page file to something like 1 MB instead.

      1. GWO says:

        I’m amazed the OS let apps access the pagefile, either directly or via an API. What goes into the Pagefile is the kernel’s business, and apps should limit their involvement to advising the kernel.

        1. Ben says:

          Apps can “use” the page file by simply allocating memory; there is no mystery to it.

          More to the point, they can create a file mapping object which is backed by the page file, for use as shared memory between processes. This is the sensible thing to do if the data does not need to persist when the application is not running. (See the first parameter of CreateFileMapping.)
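          A minimal sketch of that pattern (Windows-only; the mapping name and size are made up for illustration):

          ```c
          // Sketch: pagefile-backed shared memory via CreateFileMapping
          // with INVALID_HANDLE_VALUE as the file handle. Windows-only.
          #include <windows.h>

          int main(void)
          {
              // Passing INVALID_HANDLE_VALUE instead of a real file handle
              // requests a mapping backed by the system paging file.
              // "Local\\MyShared" is a hypothetical name that other processes
              // would open with OpenFileMapping.
              HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                            PAGE_READWRITE, 0, 4096,
                                            L"Local\\MyShared");
              if (!h) return 1;

              char *view = (char *)MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
              if (view) {
                  lstrcpyA(view, "hello from process A"); // visible to any process
                  UnmapViewOfFile(view);                  // mapping the same name
              }
              CloseHandle(h); // mapping vanishes when the last handle closes
              return 0;
          }
          ```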

          1. GWO says:

            Well, sure, but that’s just “accessing the pagefile” in the sense of “use virtual memory and let the kernel manage it”. From an application standpoint, an anonymous mapped file exists in virtual memory – whether it’s backed by physical RAM or the pagefile should be an irrelevance. Certainly, there’s no reason why CreateFileMapping() should fail if there is plentiful free RAM but no pagefile.

          2. GWO says:

            See also the following article by Raymond Chen, in 2013 https://blogs.msdn.microsoft.com/oldnewthing/20130301-00/?p=5093

    2. Joshua says:

      RAM drives can use out-of-range memory on 32-bit Windows.

      1. xcomcmdr says:

        How ?

        1. ender says:

          By using something like ImDisk+awealloc from http://www.ltr-data.se/opencode.html/ ?

        2. Yuhong Bao says:

          I think many of them used to use PSE36.

    3. Simon Farnsworth says:

      In addition to the other answers, consider devices like http://www.all1.com.tw/en/CDD101Storage%20Turbo.html – they’re RAM disks, technically, but accessed over PCIe like an NVMe or SATA device. There have also been SATA RAM disks on the same principle.

  2. Yukkuri says:

    For performance I would put the null device on a WOM SSD

  3. Myria says:

    I wish there were a /swaprun option that worked regardless of whether the image is located on the network or a removable drive / CD.

    1. kantos says:

      You might think you do, but you don’t. Windows can’t protect users from bad hardware setups. Giving developers an option to always do a /SWAPRUN would end up with people doing it because “faster”, without realizing that all they are doing is copying from disk to the same disk in the vast majority of cases, thus slowing the system to a crawl as it thrashes the disk madly. The intent of the flag was to say “don’t try running this from a potentially super-slow or intermittent source”, not “I want my program to run in fast mode”, mostly because there is no such thing as “fast mode”. Recall that if the binary is already local, it will simply be mapped into memory as-is, which should be fast enough in the vast majority of cases. In the cases where it’s not, I would say there were other, much more serious issues going on.

      1. Joshua says:

        I want it because it would unlock the .EXE or .DLL after loading so it can be deleted while running.

        1. Darran Rowe says:

          While you may not be able to delete it while it is running, you can certainly move it while it is running. You can thus create a temporary directory, move the running executable into it, then use MoveFileEx to delete it (move it to a NULL file name) with the delay-until-reboot option. You can also delete the temporary directory this way if you want.
          There are other ways of doing this too, one of the simpler being the Windows delete-on-close flag. So deleting a running executable is possible on Windows, it just means more work than what you would have to do on Linux.
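          A sketch of those steps (Windows-only; error handling trimmed, and note that MOVEFILE_DELAY_UNTIL_REBOOT normally requires administrator rights, as pointed out elsewhere in this thread):

          ```c
          // Sketch: park the running executable under a temporary name,
          // then schedule it for deletion at the next reboot. Windows-only.
          #include <windows.h>

          BOOL SelfDeleteOnReboot(void)
          {
              WCHAR self[MAX_PATH], tmpDir[MAX_PATH], parked[MAX_PATH];

              GetModuleFileNameW(NULL, self, MAX_PATH);
              GetTempPathW(MAX_PATH, tmpDir);
              GetTempFileNameW(tmpDir, L"del", 0, parked); // reserve a name
              DeleteFileW(parked);            // free the placeholder file

              if (!MoveFileW(self, parked))   // moving a running image is allowed
                  return FALSE;

              // NULL new name + MOVEFILE_DELAY_UNTIL_REBOOT = delete at next boot
              return MoveFileExW(parked, NULL, MOVEFILE_DELAY_UNTIL_REBOOT);
          }
          ```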

          1. skSdnW says:

            DeleteOnClose does not work on running .exe files because Windows does not open the file with FILE_SHARE_DELETE. Only admins can use MOVEFILE_DELAY_UNTIL_REBOOT. There is a reason people resort to ugly hacks like rundll32 or batch files to self-delete…

          2. Darran Rowe says:

            @skSdnW:
            Then if that is how it is, so be it.
            The batch file method itself is nice, short, and easy to do, so I don’t see why it is an ugly hack, or why people make such a fuss when at least one easy way of doing this exists.

        2. kantos says:

          It would seem to me that a separate ExitProcessAndDeleteExecutable that requires elevated privileges might be a better choice here. But my main concern is that such a method could easily be used by malware: create a thread in Explorer and call that function, etc. On the vast majority of systems that wouldn’t even require elevation, because they don’t really have UAC on.

          1. Joshua says:

            Why would it need elevation? It would, of course, only delete files you could delete anyway.

  4. Joshua says:

    The RAM driver knows about the situation and claims all memory above the 4GB barrier not already mapped to IO.

  5. Patrick Van Cauteren says:

    I used the /SWAPRUN option in the past to make sure that when the executable is loaded from a network drive, the application doesn’t unexpectedly crash with an in-page I/O error if the network connection drops. However, since Windows Vista, the /SWAPRUN option is also honored by Windows Explorer when it shows additional executable information. The result is that if the /SWAPRUN flag is set in the exe, Windows Explorer loads the complete (multi-megabyte) executable over the network just to show an icon of a few KB. So in practice the /SWAPRUN option is unusable.
