Why do we all use Wintel machines these days?

Barry Dorrans made a comment on Monday’s blog post that reminded me of the old IBM PC technical reference manual.

In my opinion, this document is the only reason that we’re all using Wintel computers these days (as opposed to Apple Macintoshes or Commodore Amigas).

You see, when IBM first developed the IBM PC, they entrusted the project to a visionary named Don Estridge.  Don’s vision was to produce a platform whose design was closed but whose architecture was totally open.  When IBM first shipped the PC, they also made available a reference manual for the PC.  This reference manual included EVERYTHING about the PC’s hardware.  The pin-outs on the cards.  The source code to the System ROMs.  And most importantly, they even included the schematics of the original PC.

They continued this tradition throughout the original IBM PC line – for every major revision of the original PC line, there was a technical reference manual that accompanied the product.  The XT, AT and network cards all got their own technical reference manuals.

This was an EXTRAORDINARY admission.  For most of the other PC manufacturers, their schematics and ROM source code were tightly held secrets.  They didn’t want people designing hardware for their platforms or messing with their system ROMs, because then 3rd parties could produce replacement parts for their PCs and undercut their hardware business.  For instance, the original Mac didn’t even have any expansion capability – you could plug a keyboard, a mouse and a power cord into it and that was about it.

For whatever reason, Don Estridge decided that IBM should have a more open policy, and so he published EVERYTHING about the IBM PC.  The ROM sources were copyrighted, but other than that, everything was fully documented – everything from the pin-outs and timing diagrams on the parallel interface to the chip specifications of the various processors used on the motherboard.  As a result, a thriving 3rd party hardware market ensued, providing a diverse hardware platform far beyond what was available on other platforms.  In addition, IBM licensed MS-DOS and published full documentation for it as well.

When I was writing the BIOS for MS-DOS 4.0, I had a copy of the Intel components data catalog and a ream of chip spec sheets on my desk at all times so I could look up the detailed specifications for the system.  I used the timing diagrams to debug a bunch of problems with the printer drivers, for example – there was a bug in the printer hardware on the original IBM PC that prevented using the printer interrupt for interrupt-driven printing.  IIRC, the INTR line was raised before the “data ready” line was raised, which meant that the printer interrupt would be generated before the printer was actually ready to accept the next byte of data.  They later fixed this on the PC/AT machines.

As a result, a confluence of documented hardware and software platforms existed which allowed software developers to take full advantage of the hardware, and the IBM PC platform grew and flourished.  When IBM didn’t provide graphics support for their monochrome monitors, an OEM (Hercules) stepped up and provided it.  When IBM/Microsoft didn’t provide spreadsheet support, an ISV (Lotus) stepped up and provided it.

But it was the synergy of open hardware and open software that made all the magic come together.  None of the other PC manufacturers provided that level of openness at the time.

This openness wasn’t always to IBM’s advantage – it also allowed OEMs like Compaq to clone the IBM hardware and produce their own interoperable clone machines – but it did allow the platform to thrive and succeed.

In my honest opinion, THIS is the reason that the IBM PC architecture (ISA, later called Wintel) succeeded.  It was because IBM and Microsoft let anyone produce products for their platform and NOT because of any marketing genius on IBM’s (or Microsoft’s) part.


Comments (29)

  1. Anonymous says:

    Ironic that IBM is now pretty much out of the desktop PC business. Next up for consumer-space commoditization will be the OS and middleware, one hopes.

  2. Anonymous says:

    I purposely edited out a bunch of comments about the second generation and later IBM PC hardware (remember the MCA). And that’s all I’ll say on the subject, except to say that I think that the MCA is the single reason that IBM is no longer the dominant force in the desktop platform.

    Open hardware standards (ok, it costs $3000 to be a member of the PCISIG and get access to the PCI specifications) and open software standards (document EVERYTHING about your platform, like Microsoft does) will win over closed solutions every time.

  3. Anonymous says:

    Bit of a bummer that they screwed up the timing on SBHE on the AT if you wanted to decode anything smaller than 128K – it made doing 16-bit cards that decoded into the C000–F000 space a bit hit and miss. (Scary thought: I can remember all about this from over 15 years ago. I must be badly damaged.)

    I have my own thoughts on MCA but it reminded me of a ‘286 bus brought out pretty raw.

    I always thought IBM copied the openness from the Apple ][, where again the BIOS and schematics were available. Does anyone know for sure if that was an influence?

  4. Anonymous says:

    Hey, I sparked something 🙂

    It does take me back. That was my first job, System/38 tape monkey and technical support person for a bunch of PCs. An original XT on my desk.

    And the day IBM came to visit with a PS/2, with a PC Support card so we could connect it up to the mainframe over Token Ring, and there it was, OS/2 with Microsoft manuals.

    But Larry, come on, Microsoft may document everything about your platform, but do we ever get to see it? 🙂

  5. Anonymous says:

    What’s not documented about the Windows platform?

    Be specific. Show an example of something that Microsoft’s applications can do that’s not documented. Or something that our Middleware applications can do (DirectX, media player, messenger) that’s not documented.

    Jeremy Allison’s issues with the domain controller replication algorithms notwithstanding (and his are protocol documentation issues), as far as I know, EVERYTHING is documented.

    There are internal interfaces that aren’t documented for various reasons (I’m working on some of them for Audio Policy in Longhorn), but there’s absolutely nothing that any Microsoft or Microsoft Middleware application can do on Windows that’s not documented.

  6. Anonymous says:

    Audio Policy? Secure Path stuff? Ok that’s not documented for a reason. I was thinking more along the lines of the fabled full specifications of the Office file formats.

  7. Anonymous says:

    Umm. Office file formats aren’t a part of the windows platform. I can’t speak to what they do.

    But Windows is documented. Audio Policy actually will be documented, and I’ll write about it once we get more stuff nailed down, but the direct low level interfaces involved won’t be – audio policy’s internals will only be documented indirectly – basically to the extent that a developer would need to be able to take advantage of the feature.

  8. Anonymous says:

    You are opening up a miserable Pandora’s box with this topic. IBM may have started out with the best intentions (being just about the only manufacturer), but they sure did lose their way and make a terrible mess out of it in the end through utter stupidity and arrogance. Dare I mention the PowerPC, and even worse, WorkplaceOS? Or, on the software end, abominations like the Lotus "Dumb"Suite or even the current Notes?

    Or OS/2, which had such an absurdly terrible quality rep that it was basically unusable in any type of corporate environment (v3 of which had 27 service packs, each consisting of over 20 diskettes apiece, and most of them un-doing the others). Nobody would stand for that in today’s environment.

    I lived that life every day – and just barely came out of it in one piece. I had a truly miserable time with all that junk. I believe that we are extremely fortunate nowadays to have Intel and AMD, with Microsoft operating systems and associated products. It may be easy to criticize them for their occasional misstep, but they are doing a heckuva fine job and I’m proud to use their products.

    And, before flaming, kindly remember that the point here is to build business applications and roll them out to our end-users to support our companies. The point is *not* to argue over who has the coolest APIs or who has the greatest instance of what I call "software radicalism". It’s all about the business.

    As for this documentation issue, I believe it’s also BS. It’s no coincidence that those who scream the loudest (like "un"Real Player) also did the lousiest job writing their products. "Secret" APIs won’t help them – bearing down and doing a quality job of software design & development & testing will.

  9. Anonymous says:

    Btw, I do want to be clear – there WERE undocumented pieces of the system that were used by (among others) DirectX that we did have to document (DirectX used a private Mixer message to determine the PnP device identifier associated with a given mixer device IIRC).

    But even that interface is documented these days.

  10. Anonymous says:

    I actually think that the success of .NET is at least in part due to blogs and the SSCLI.

    A lot more info is available than for other complex systems.

  11. Anonymous says:

    Ok, I’ll give you a short list:

    386 functions in SHLWAPI (arguably part of IE and not relevant to app developers)

    18 functions in WinInet (arguably these functions are also only of use to IE)

    All of the NT native API (yes, you are probably insane to program to this, but there was an example recently of getting the name of the file that a file handle refers to, which can only be done with the NT native API)

    SystemFunction* in ADVAPI32 (even obfuscated the names there)

    93 functions in ComCtl32 (ok, ~10 of these are for a precursor to unicows)

    A ton of stuff in Shell32

    Many remotely accessible RPC interfaces, such as winreg, svcctl and samr.

    I wonder what percentage of these could be justified as being undocumented because they are meant to be only used by OS components (not including the remote stuff) and could change at any time?

  12. Anonymous says:

    You can’t get the name of a file from the NT native API. At least not for many many files. We had a discussion on this in an internal alias just the other day.

    Any APIs in SHLWAPI or WININET that are used by IE are documented, that’s one of the things that the consent decree mandated. And believe me, we’re not going to mess that one up.

    Any of the other ones are used for internal communication between explorer.exe and shlwapi.

    Just because a DLL exposes an interface (or there’s an RPC endpoint available) doesn’t mean that it’s an API. And it doesn’t mean it’s a good idea to call it.

    The RPC interfaces on winreg/svcctl ARE documented, they’re just the service APIs and the registry APIs – the RPC interfaces are how those APIs are remoted to other machines. SamR’s another story, but I believe that the SAM APIs ARE documented (I can’t find it in a quick google but…)

    I’ll give an example of this that is close to my heart: One of the APIs we documented as a part of the consent decree was the API that’s used by DirectX that I mentioned above that gets the PnP device identifier of a mixer device. Well, in Longhorn, we’re not using PnP to identify audio devices – we’re introducing a paradigm shift that moves the audio engine away from PnP. Well, this internal API that we just documented is likely to stop working, because the new paradigm doesn’t map to PnP at all.

    If we had intended for the API in question to be documented, we’d have been precluded from changing our internal paradigm. Right now I’m in discussion with lawyers to understand exactly how much support is required for this API. We’re currently trying to figure out if we need to come up with some form of compatibility shim to keep this API working or if we can let it die the graceful death it deserves. This is NOT a pleasant process.

    There are also licensing reasons that things aren’t documented. For example, the code that lets the explorer access Zip files is licensed from another vendor and we’re not allowed to disclose how that works. For many years, we were constrained by US export laws from exposing the encryption functions that are contained inside some of the system DLLs.

    And there are still other things that aren’t exposed because application authors can abuse them. For example, see: http://weblogs.asp.net/oldnewthing/archive/2003/09/03/54760.aspx

  13. Anonymous says:

    There are some things which are documented but the documentation is only available if you license the appropriate protocols. See http://members.microsoft.com/consent/Info/default.aspx. Jeremy Allison’s problem (if you’re talking about the Samba guy) is that the license agreement for the protocols is incompatible with the GPL under which Samba is licensed. Basically, Microsoft want money for every released copy. You can argue the rights and wrongs of this, but fundamentally it’s Microsoft’s intellectual property.

    So Samba’s only recourse is to follow the same course Microsoft did when writing file importers for their competitors applications: reverse-engineer the protocol.

  14. Anonymous says:

    Mike, you are absolutely right. But those are protocols, not APIs. 99.99999% of the people out there don’t need to know the protocols, but they DO need to know the APIs.

    Real and other multimedia player application authors made a credible case that Microsoft was taking competitive advantage over them by not disclosing all these APIs which is why they’re documented.

    I’m unhappy that it took an antitrust lawsuit to make this happen, btw. I truly believe that APIs were meant to be open.

    I also remember fondly the days when we used to work closely with Jeremy to make Samba interoperate with our products – he’s the guy who discovered that the Lanman authentication protocol didn’t use DES, but instead used "DES-with-a-typo".

  15. Anonymous says:

    Ok, I understand the sentiment that you don’t want to document every function call out there because it will mean that you can’t change it (and for legal reasons as you cited).

    However, there are many interfaces that won’t be changing soon (such as the RPC interfaces) because there are interoperability issues between other Windows versions. I’m sure InstallShield wouldn’t be too happy to learn that MSI uses several NT native API calls, even though it is supposedly becoming an OS component.

    I even have an exception to the rule that every API used by IE is documented. It uses a supposedly reserved field in GetUrlCacheEntryInfoEx to verify that an entry is of a specific type. IE also uses several undocumented toolbar messages. Having said that, these extra "features" used by IE are probably of little interest to Windows application developers so you are probably off the hook there 😉

    You could quite easily document the RPC interfaces that have been around since the early NT days, as they cannot be changed without breaking compatibility with clients from past Windows versions (e.g. regedit). I agree that you can get a fair clue of the RPC interface from looking at the corresponding Win32 API functions, but the data on the wire isn’t always exactly what is passed into a Win32 function. The nearest I can find to these being documented is from the Samba source 🙂

  16. Anonymous says:

    As far as I know (and I was directly involved in the development of the service APIs and indirectly in the registry APIs), the RPC wrappers are EXACTLY the same as the local APIs.

    Originally in NT 3.1, the registry APIs were all hosted in the screg.exe process (that’s what it was for – it hosted the service controller and registry). The Win32 registry APIs were a paper thin wrapper around these RPC interfaces. Before we shipped NT 3.1, there was a huge performance problem associated with the registry APIs so we removed the RPC wrapper from the APIs and directly called the kernel interfaces. The RPC interface was left in the product for remote registry access.

    Just for grins, I looked at the service controller IDL file. There are absolutely no functions that are available through the RPC interfaces that aren’t available through the service APIs.

    Now there IS a legitimate issue for Jeremy Allison and the Samba project – since they want to support cross-platform administration of windows machines, they need to understand those registry keys – that’s why they are the 0.000001% of the people who need that information.

    But for windows developers writing windows APIs (as opposed to linux developers writing tools to administrate NT machines), every one of the interfaces that’s available is totally documented.

    As far as I know, MSI has been a Windows component since Win2000.

  17. Anonymous says:

    That’s very interesting history. I had suspected that at some point in the past the registry wasn’t implemented in the kernel.

  18. Anonymous says:

    By the way, this topic’s gone off track over time. The thesis is that the documentation of Windows/DOS/whatever APIs is what allowed developers to write to the platform.

    It’s not that everything in Windows is documented well enough for someone to either (a) interoperate with it from another platform (as the wine project is doing) or (b) permit someone from another platform to interoperate (as the Samba project is doing).

    I’m not saying that these aren’t worthy endeavours (I’m agnostic on them). Whether or not the platform is well enough documented for Wine or Samba to do their job isn’t the issue IMHO. It’s if the platform is well enough documented for Windows developers to do their job.

  19. Anonymous says:

    Oh, and Wine Developer – the backing store for the registry has always been in the kernel (since device drivers need to be able to call into it). In the original implementation, the user mode code simply used RPC to talk to the screg.exe service which implemented the Win32 abstraction of the NT registry (which is subtly different from the kernel NT registry API). The DDK has decent documentation of the NT registry APIs.

  20. Anonymous says:

    I’m not sure there’s much competitive advantage in IsUserAnAdmin() – it saves about 20 lines of code (which are supplied in the documentation for CheckTokenMembership!).

    Arguably, it also solves the wrong problem – the program should be checking whether the user has the appropriate permissions or privileges to perform the requested task. NT has a great fine-grained access control and privilege mechanism; using IsUserAnAdmin (which component uses this previously undocumented function?) reverts to the bad old UNIX model of all-privileged admin, no-privilege user.
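    The same point can be sketched concretely. Here’s a hedged, illustrative analogue in Python on a POSIX system (this is deliberately *not* the Win32 token API the comment is about): ask the fine-grained question about the specific resource, not the coarse question about the user’s overall status.

```python
import os
import tempfile

# Hedged POSIX analogue of the point above: check the specific
# permission the task needs rather than a blanket "is the user
# an admin" test.
fd, path = tempfile.mkstemp()
os.close(fd)

# Coarse question -- the moral equivalent of IsUserAnAdmin():
running_as_superuser = (os.geteuid() == 0)

# Fine-grained question -- do I have the permission this task needs?
can_write = os.access(path, os.W_OK)
```

    (On Windows the fine-grained version is an access check against the object’s ACL, e.g. via AccessCheck, rather than geteuid.)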

  21. Anonymous says:

    Larry, you wrote "You can’t get the name of a file from the NT native API. At least not for many many files. We had a discussion on this in an internal alias just the other day."

    NtQuerySystemInformation() with info level SystemHandleInformation returns information on all open handles in the system, including their names (if any).
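    For flavor, a hedged Linux analogue of that handle walk in Python – enumerating the current process’s open descriptors through /proc and resolving their names. (The /proc/self/fd layout is Linux-specific; nothing here is the NT API itself.)

```python
import os
import tempfile

# Hedged Linux analogue of SystemHandleInformation: list this
# process's open descriptors and resolve each one's name.
fd, path = tempfile.mkstemp()

names = {}
for entry in os.listdir("/proc/self/fd"):
    try:
        names[int(entry)] = os.readlink(f"/proc/self/fd/{entry}")
    except OSError:
        # A descriptor can vanish between listdir and readlink.
        pass

resolved = names.get(fd)  # should name the temp file we just opened
os.close(fd)
os.remove(path)
```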

  22. Anonymous says:

    Cool Keith :). Does it work for Sockets? My point was just that there are many classes of files that don’t have filenames and that’s why the API doesn’t work 🙂

    Btw, while SystemHandleInformation is undocumented, NtQuerySystemInformation is documented:


    Btw2, what’re you up to these days?

  23. Anonymous says:

    Mike, was this a wrong-blog thingy? I didn’t see anything about "IsUserAnAdmin()".

  24. Anonymous says:

    For sockets, it returns \Device\Afd. This *is* the name of the file object, but it’s not terribly useful. (Of course, you can just key off this name, then call sockets APIs to get the local/remote addresses, etc.)

    Something that *IS* impossible to get (I think) is a list of open file *objects* for a process. It’s quite possible to read/write a file after closing the handle:

    hf = CreateFile( … );              // open the file
    hm = CreateFileMapping( hf, … );   // create a section object backed by the file
    CloseHandle( hf );                 // the section holds its own reference to the file
    pv = MapViewOfFile( hm, … );       // map a view of the section
    CloseHandle( hm );                 // the view holds a reference to the section too

    Now you can read and write the file through the mapped view, even though you have neither the file handle nor the section handle open.
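    This trick isn’t NT-specific, btw – the same “mapping outlives the handle” behavior shows up with POSIX mappings. A hedged Python sketch, with mmap standing in for CreateFileMapping/MapViewOfFile:

```python
import mmap
import os
import tempfile

# Hedged analogue of the snippet above: once a mapping exists, the
# file stays readable and writable through the view even after the
# descriptor (the "file handle") is closed.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

view = mmap.mmap(fd, 0)   # map the whole file, read/write
os.close(fd)              # close the descriptor; the view stays valid

view[0:5] = b"HELLO"      # write through the mapping, no handle open
data = bytes(view[:11])

view.close()
os.remove(path)
```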

    Anyway, enough NT trivia.

    What am I up to? Not much — just hanging out, writing silly messages in people’s blogs.

  25. Anonymous says:

    Sorry, I missed a step: it’s one of the settlement program APIs.

    IMO, there was a reason they weren’t documented – they largely weren’t all that useful. I don’t think a convincing case for revealing additional APIs (that then have to be supported, Raymond must be cursing 😉 ) was actually made, but everyone believed the competitive-advantage argument, so the DOJ asked for it as part of the settlement.

    A list of the APIs can be found at http://msdn.microsoft.com/library/en-us/dnapiover/html/api-overview.asp, although in some cases the function was documented but some parameters were omitted (based on a comparison between what’s on the site now and the MSDN Library January 2001 I still use with VC6).

  26. Anonymous says:

    Even though I’m a blue badge, I’ll have to disagree with your assertion. Like Warren Buffett notes, successful companies are ones with moats.

    You are correct: We all use Wintel (except I use Macs too… partially because I’m in MacBU).

    But notice we don’t use WinIBM. Aside from IBM’s many blunders, IBM had no sustainable business model for building PCs. They had no moat. By making everything open, they had no protected revenue stream. All it would take was a Compaq (the Dell of its day) to build the same product, but cheaper. And consumers would buy that.

    Being in the "standards" business is like being in the commodities business. And life sucks as a commodity.

    Tivo’s current business (PVRs) is destined to be doomed as well. They have no moat. Anyone can build a device that provides similar functionality, at a lower cost or with more features (WinMCE).

    Microsoft’s success is two pronged: by providing a great platform for developers developers developers developers, and through the amazing power of licensing.

    But you’d better be ready to be in the licensing business, or you are dead. Apple tried that mistake in the mid-90’s by licensing their OS to clone companies – a fatal mistake for a fundamentally hardware company.

  27. Anonymous says:

    An interesting point, Dennis, and clearly we disagree. I actually think that Apple shouldn’t be in the hardware business.

    The problem they had is that they tried to go both ways – they tried to license their software AND keep on selling hardware – they really needed to pick one.

  28. Anonymous says:


    Found this informative entry on Google. Would you consider submitting a version to Wikipedia, which has no entry for the IBM PC Technical Reference Manual? (It’s just referenced on the page defining ‘BIOS’ right now.)