The wired network in my building’s being unusually flaky, so I’m posting this from my laptop; sorry for the brevity.
I’ve not finished reading it (the site’s heavily slashdotted), but his first paragraph got me worried:
Back in the ‘good old days’ of command prompts and 1.2MB floppy disks, programs needed very little RAM to run because the main (and almost universal) operating system was Microsoft DOS and its memory footprint was small. That was truly fortunate because RAM at that time was horrendously expensive. Although it may seem ludicrous, 4MB of RAM was considered then to be an incredible amount of memory.
4MB of RAM? Back in the “good old days” of 1.2MB floppy disks (those were the 5 1/4″ floppy drives in the PC/AT) the most RAM that could be addressed by a DOS-based computer was 1MB. If you got to run Xenix-286, you got a whopping 16MB of physical address space.
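For anyone who's forgotten why the limit was 1MB (my own illustration, not from the article): real-mode DOS addresses memory with 16-bit segment:offset pairs, where the physical address is segment times 16 plus offset, yielding a 20-bit address space:

```python
# Real-mode 8086/8088 addressing: physical = segment * 16 + offset.
# With 16-bit segments and offsets, that's a 20-bit (1MB) space.
def real_mode_addr(segment, offset):
    """Compute the physical address for a real-mode segment:offset pair."""
    return (segment << 4) + offset

# The highest expressible address is FFFF:FFFF = 0x10FFEF, slightly
# over 1MB (later exploited as the HMA on 286+ machines).
top = real_mode_addr(0xFFFF, 0xFFFF)     # 0x10FFEF

# On an 8088, the address bus is only 20 bits wide, so addresses
# wrap around modulo 1MB:
wrapped = top & 0xFFFFF                  # 0x0FFEF
```

So no matter how much RAM you bolted onto the machine, a real-mode DOS program could only see the first megabyte of it (EMS/XMS trickery aside).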
I was fuming by the time I’d gotten to the first paragraph of the first section:
Whenever the operating system has enough memory, it doesn’t usually use virtual memory. But if it runs out of memory, the operating system will page out the least recently used data in the memory to the swapfile in the hard disk. This frees up some memory for your applications. The operating system will continuously do this as more and more data is loaded into the RAM.
This is SO wrong on so many levels. It might have been true for an old (OS8ish) Mac, but it’s not been true for any version of Windows since Windows 95. It wasn’t even true for Windows 1.0: its memory manager didn’t operate that way. Windows 1.0 had a memory manager, and it was always enabled and actively swapping data in and out of memory, but it didn’t use virtual memory, because it couldn’t use the hardware (there wasn’t any hardware memory management available to Windows 1.0).
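To be clear about what the article is describing (and why it’s a caricature): it claims the OS touches the swapfile only once memory is exhausted, and then evicts strictly least-recently-used data. Here’s a toy sketch of that model in Python; the class name and structure are mine, and it illustrates the article’s oversimplified claim, not how any real Windows memory manager behaves (real ones trim working sets and write pages proactively):

```python
from collections import OrderedDict

class NaiveSwapper:
    """Toy model of the article's claim: pages are evicted to 'swap'
    only when physical memory is completely full, strictly LRU-first.
    This is the oversimplification being criticized, not reality."""

    def __init__(self, ram_pages):
        self.ram_pages = ram_pages
        self.ram = OrderedDict()   # page -> data, ordered oldest-first
        self.swap = {}             # pages that were paged out

    def touch(self, page, data=None):
        """Reference a page, faulting it in and evicting if needed."""
        if page in self.ram:
            self.ram.move_to_end(page)       # mark most recently used
            return
        if page in self.swap:
            data = self.swap.pop(page)       # page it back in
        if len(self.ram) >= self.ram_pages:  # only NOW does it swap
            victim, vdata = self.ram.popitem(last=False)  # evict LRU
            self.swap[victim] = vdata
        self.ram[page] = data
```

With three pages of “RAM”, touching pages 1 through 4 evicts page 1 to swap, exactly the lazy behavior the article asserts. The real-world objection is that no modern Windows waits until memory is full to start paging.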
It REALLY disturbs me when articles like this get distributed, because it shows that the author fundamentally didn’t understand what he was writing about (sort of like what happens when I write about open source 🙂 – at least nobody’s ever quoted me as an authority on that particular subject).
Edit: I’m finally at home, and I’ve had a chance to read the full article. I’ve not changed my overall opinion: as a primer on memory management, it’s utterly pathetic (and dangerously incorrect). Having said that, the recommendations for improving the performance of your paging file are roughly the same as I’d come up with if I were writing the article. Most importantly, he explains the difference between having a paging file on a separate partition and on a separate drive, and he adds some important information on P-ATA and RAID drive performance characteristics that I wouldn’t have included if I were writing the article. So if you can make it past the first 10 or so pages, the article’s not that bad.