It’s the hardware, stupid!

Riffing on Raymond, once again 🙂

Raymond’s post today reminded me of an email message sent out (company-wide) by one of the very senior developers on the Windows 1.0 team, about 6 months before they shipped.

In his email, the developer announced that he wasn’t going to be looking into any more crashes in Windows because they were all caused by bad hardware.

It turns out that he actually almost had a point.  This particular email went out after he’d spent several weeks debugging a series of problems on dozens of people’s machines.  In every case, the problem was caused by bad 3rd party RAM.

You see, back in those days, machines typically came with 512K of RAM.  To get the extra 128K needed to reach 640K (or more), people bought 3rd party memory extenders.  And often, those 3rd party memory extenders used sub-standard memory chips (either chips without parity, or chips that were just flat-out bad).  This developer happened to run across a large number of busted cards.

So sometimes, it IS the hardware.


Comments (5)

  1. mschaef says:

    What kind of e-mail did the Windows 1.0 team use? I’m imagining a DOS terminal program (the original MS Access, perhaps?) and a serial link to a PDP-something running e-mail software.

    "So sometimes, it IS the hardware."

    Yeah, just ask an embedded software developer. 🙂

  2. Anonymous says:

    It is all too easy to forget that a computer system is a *system*, and that the many parts of it – hardware and software – are interdependent.

    That having been said, it has to be added that bad memory cards – and bad individual chips – were a widespread problem in the XT/AT days, one which really wasn’t solved until the advent of SIMMs. Even today bad memory is more common than is usually realized, especially since it often is intermittent and can have symptoms that seem unrelated to memory. Video memory problems can be particularly hard to track down.

    Regarding Michael Schaeffer’s comment, I’ve been given to understand that most MS development prior to about 1989 was done in a XENIX environment, largely because it supported e-mail, FTP, source control, etc. better than the MS-DOS software of the time. XENIX was still a large part of Microsoft’s long-term business strategy at the time.

  3. J Osako’s right: MS development (at least OS development) in those days was done on Xenix machines.

    Most people had an H19 terminal in their office that they used for email etc, and a serial line to download from the mainframes.

  4. Anonymous says:

    There was a time when Microsoft Basic compiler development was done on a VAX. At around that time, DEC had to replace a bunch of defective third-party memory modules that had been installed by value-subtracted resellers, because the VAXes had DEC’s name on them.

    I do wish that parity (or, better yet, ECC such as IBM had 35 years ago) were still commonly used in RAM, as it is just about everywhere else in PCs. Then we’d be able to infer with rather high accuracy whether a problem was really in RAM or in software.
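    The parity scheme the commenter is wishing for is simple enough to sketch in a few lines. This is a hypothetical illustration in Python of the idea (a stored parity bit detects any odd number of flipped bits, though unlike ECC it can’t correct them or say which bit flipped), not how any real memory controller is implemented:

    ```python
    def parity_bit(byte: int) -> int:
        """Even parity: return the bit that makes the total count of
        1 bits (data plus parity) come out even."""
        return bin(byte).count("1") % 2

    stored = 0b1011_0010             # data byte written to memory
    check = parity_bit(stored)       # parity bit stored alongside it

    # A clean read still matches the stored parity bit:
    assert parity_bit(stored) == check

    # A single-bit flip (say, a failing DRAM cell) changes the parity,
    # so re-checking on read detects the error:
    corrupted = stored ^ 0b0000_1000
    assert parity_bit(corrupted) != check
    ```

    On a parity-equipped machine of that era, the mismatch would raise an NMI and halt with a memory error instead of letting the bad data masquerade as a software crash.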

  5. Anonymous says:

    Seems like it’s not much better today…

    "Why untested DRAMs are getting into more and more products", 4/18/05:
