How can I get the memory manager to prefetch bigger chunks of data from my memory-mapped file?


A customer had a memory-mapped file and they found that when they accessed a page in the mapping, Windows paged in 32KB of memory, even though the documentation says that only 4KB gets paged in. The customer's application reads 100 small records from a memory-mapped file on an SSD, so latency is the most important factor. They were hoping for a way to get the prefetcher to prefetch bigger chunks of the memory-mapped file.

Okay, let's take things one issue at a time.

Why are they observing 32KB reads when the documentation says 4KB? The operating system's contractual obligation is to bring in the entire page, which is 4KB on x86-class machines. However, the operating system is allowed to perform speculative reads, and Windows will read up to 32KB of memory around the faulting page. The precise amount depends on a variety of factors, including how the memory was mapped, which pages are already present, and, for pagefile-backed memory, whether the pages are contiguous in the pagefile.¹

What the customer can do is call Prefetch­Virtual­Memory to initiate explicit prefetching.
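A minimal sketch of what such an explicit prefetch might look like (the file name, view size, and record count are hypothetical, and error handling is abbreviated; Prefetch­Virtual­Memory requires Windows 8 or later):

```cpp
// Sketch: map a file and ask the memory manager to prefetch the whole
// view in one batch of I/O requests, instead of demand-faulting page
// by page. File name and sizes here are made-up illustration values.
#include <windows.h>

int main()
{
    HANDLE file = CreateFileW(L"records.dat", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY,
                                        0, 0, nullptr);
    void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    // Describe the region we are about to access. For example,
    // 100 records at one 4KB page each is 400KB.
    WIN32_MEMORY_RANGE_ENTRY range;
    range.VirtualAddress = view;
    range.NumberOfBytes  = 400 * 1024;

    // Hint the memory manager to bring the range into memory now.
    // This is advisory: failure just means no prefetch happened.
    PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);

    // ... read the records through 'view' ...

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```

Because the call takes an array of ranges, an application that knows which scattered records it will touch can pass one `WIN32_MEMORY_RANGE_ENTRY` per record and issue all the prefetch I/O in a single call.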

The customer wrote back that with the explicit call to Prefetch­Virtual­Memory, the I/O system sends all the requests to the device at once, "which seems to be exactly what we need."

¹The maximum automatic prefetch for pagefile-backed memory is 64KB, but this increase is not as big a deal as it sounds, because in practice, consecutive addresses in memory tend not to be assigned to consecutive pages in the pagefile, so the speculative read from the pagefile tends not to read very much.

Comments (6)
  1. DWalker07 says:

    The article title doesn’t seem to match the first paragraph.

    The customer wanted bigger chunks read in? Yet they noticed that 32KB is being read when the doc says 4KB will be read?

    32KB is bigger than 4KB. Unless they are wanting chunks BIGGER than 32KB. In the third sentence, the antecedent of “bigger” is not clear….

    1. They wanted even bigger than 32KB. Math says that they wanted 100 x 4KB = 400KB.

      1. DWalker07 says:

        I didn’t know they wanted 100 4K records; all I knew was that they wanted 100 small records. Small could have meant ten bytes for all I knew!

  2. alegr1 says:

    The customer didn’t know what they were doing.
    If you want your data in memory, just allocate the 400K and read data into it.
    If you want to write the modified data back, just write those 400K back.

    1. Ben Voigt (Visual Studio and Development Technologies MVP with C++ focus) says:

      Memory-mapped I/O is much more useful than you suggest. For one thing, the coherence between multiple views on the same machine makes it possible to do things that explicit I/O never can. For another, memory usage is reduced because you don’t have two copies of the same page, one in the disk cache, one in your application.
      These advantages are balanced by the drawback that torn updates and intermediate states are visible, and maybe even persistent if your process crashes. (With synchronous I/O, partial writes are only a concern if the whole OS goes down, not just your app).

      1. alegr1 says:

        From the description, the application was just reading the file. And 400 K is not the size to worry about memory usage.

Comments are closed.
