Young Turks


Ok, this is a bit of a rant.  I recently encountered an email exchange in which someone I respect asked (more or less) “I can’t, for the life of me, see why on earth this particular piece of functionality exists in Windows”.

Now this person is somewhat younger than I (ok, most everyone in the industry is somewhat younger than I), but he is a super smart guy.

The thing is, he has NO CLUE about how the personal computer world operated back in the early 80’s when Windows was designed.  Windows was designed to run on machines with 512K of RAM and a 10M hard disk.  In addition, the CPU on which Windows was intended to run didn’t support memory protection, so the concept of “separation of privilege” was meaningless.  MS-DOS (on which Windows 1.0 was built) had a long history of putting critical OS information into an application’s data space.  For Windows, things were no different – the line between application and system was often blurred.

Whenever there was a possibility of offloading potentially optional functionality onto the running application, Windows took it.  Instead of having a preemptive scheduler, Windows used a cooperative scheduler.  That meant that applications never had to deal with ugly issues like synchronization of data, etc.  The consequence of this cooperation was that a single errant Windows application could hang all the running applications. 

But that was ok, because the overhead of the infrastructure to FIX the problem (per-application message queues, etc) would have meant that Windows wouldn’t be able to run on its target systems.  And adding all that extra stuff really wouldn’t make that much of a difference since the applications were all running in the same address space (along with Windows and the operating system).
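
To make that concrete, here’s a minimal sketch in C of the classic message pump that every Windows GUI application runs (window-class registration and window creation are omitted; the details are illustrative rather than taken from any particular program).  Under the cooperative scheduler, calls like GetMessage, PeekMessage and Yield were essentially the only points at which Windows could switch to another application, so a program that went off and computed for a while without returning to this loop stalled everything else:

    #include <windows.h>

    /* Minimal sketch of the classic Windows message pump.
     * Under the cooperative scheduler of 16-bit Windows, calls such as
     * GetMessage were the only points at which another application could
     * be scheduled, so a program that looped without coming back here
     * effectively hung every other running application. */
    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        MSG msg;

        /* ... RegisterClass / CreateWindow / ShowWindow would go here ... */

        while (GetMessage(&msg, NULL, 0, 0) > 0)   /* cooperative yield point */
        {
            TranslateMessage(&msg);   /* translate keystrokes into WM_CHAR etc. */
            DispatchMessage(&msg);    /* hand the message to the window procedure */
        }
        return (int)msg.wParam;
    }

Win32 later moved to per-thread message queues and preemptive scheduling, which is why a hung application on NT-based Windows no longer freezes everyone else’s input.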

So it’s not surprising that there were a lot of things present in the early versions of Windows that would make people cringe today.  Sometimes this isn’t a problem, but one of the key values of the Windows platform is that Microsoft very rarely intentionally breaks applications.  We’ll break applications when they depend on a security flaw, and sometimes applications will break when there’s a fundamental architectural shift occurring (we already know that some multimedia apps are broken in Vista because they depend on being able to call multimedia APIs during DLL initialization, which only worked by luck in XP).

But barring that, Microsoft’s made a strong commitment to not break customers’ applications.  The good thing is that it means that the Windows platform is remarkably stable.  Many applications written for Windows 1.0 still run on Windows Vista.  It means that corporations that have made an investment in technology aren’t going to lose that investment by moving to a newer version of Windows.  It also means that every version of Windows carries forward the designs from previous versions.

If there was any “mistake” made, it was Microsoft’s unceasing commitment to backwards compatibility.  And I personally believe that a huge part of the reason for Windows’ success in the marketplace IS that commitment.  If we didn’t have it, people would have moved on to other platforms long ago.

So when someone starts questioning why ancient stuff exists in Windows, they really need to understand the environment in which those decisions were made.  Part of the value of young turks is that they challenge the decisions made by their elders.  But before you decide to challenge an earlier decision, you need to understand the environment in which the decision was made.  Sometimes what no longer makes sense did at one time.

Btw, before people start claiming that this was somehow “Microsoft’s” fault, the original Mac OS had many of the same issues: it was designed to run on a machine with 128K of RAM that didn’t even HAVE a hard disk – it only supported a 400K floppy disk.  The designers of the Mac OS made many of the same decisions that the Windows designers did (Mac OS was also a cooperative multitasking environment); in addition, the Mac designers went even further and put significant parts of the OS into the system ROMs on the Mac, further blurring the line between application and system.

Comments (42)

  1. PatriotB says:

    Can you tell us which piece of functionality this person was talking about?

  2. Dave says:

    I assume this was regarding the WMF problem? I agree that the original functionality made sense for the era in which it was designed.

    Now that we live in an era of bountiful system resources, it seems like Microsoft has gone the other direction though. They did not preserve much compatibility between .NET and pre-.NET as VB6 developers discovered. The .NET framework is large, and users need to keep multiple copies of it on their systems. Since a lot of system services still use COM, .NET developers still need to have an understanding of COM but now have to go through yet another confusing layer to get to it. This doesn’t feel like progress.

  3. PatriotB, no, because it really doesn’t matter.

    Dave, actually COM vs .Net is an example of preserving compatibility. There’s a staggering amount of functionality deployed both by MS and 3rd parties that’s available via COM; if we want to allow .Net apps to leverage that functionality, interop is the only way to go (at least for the medium term).

    The VB.Net vs VB6 issue is a different matter; you’d have to talk to someone who actually knows the real reasons to get rational commentary on it – anything you’ll get from me is rampant speculation.

  4. Nutzo says:

    Hi Larry,

    I have heard about a project going on at Microsoft to create an OS from scratch (although there are no plans to make it a commercial product, it is more of a technological ‘game’). Wouldn’t it be reasonable to create something like that as a commercial product, so that:

    a.) you get rid of the legacy stuff

    b.) the market decides which to use

  5. Clinton Pierce says:

    "the original Mac OS had many of the same issues…"

    But didn’t have the Microsoft culture of "backward compatibility at all costs" and many of those issues have gone away.

    I’m not sure whether this is an argument for or against that culture, but it’s successfully been employed before.

  6. Scott says:

    "The designers of the Mac OS made many of the same decisions that the Windows designers did (Mac OS was also a cooperative multitasking environment),"

    Plus Apple users didn’t get protected memory and pre-emptive multitasking until OS X. At which point there was much rejoicing.

    Yay!

  7. Andreas Johansson says:

    Your story about the memory footprint of the LAN manager 2.0 with Bill probably does not make much sense to developers that today have megabytes of RAM available. "8 BEEEP basics!"

    :)

  8. Rob says:

    If I’m not mistaken, Windows 1.0 was 16-bit, and I seem to recall that 16-bit support has been dropped in Vista. Wouldn’t that mean that Windows 1.0 applications (or anything before 95?) would not run on Vista?

    Just wondering…

  9. News to me that 16-bit support is dropped in Vista; last I saw, it was still in. What was dropped was 16-bit support on x64 platforms. And it’s my understanding that that happened because the CPU couldn’t support 16-, 32- and 64-bit code running at the same time.

  10. msemack says:

    Clinton Pierce,

    "But didn’t have the Microsoft culture of "backward compatability at all costs" and many of those issues have gone away. "

    This issues have gone away, but look at the market. Apple is a VERY minor player in the industry, compared to the PC market. Clearly, Microsoft’s approach was the "winning" one.

    "I’m not sure whether this is an argument for or against that culture, but it’s successfully been employed before."

    I would argue that it wasn’t successful. Apple has slipped from being the single largest maker of personal computers to having a tiny (somewhere around 3%) market share. To me, that doesn’t sound like a corporate success story.

  11. Norman Diamond says:

    > the original Mac OS had many of the same issues: it was designed to run on a machine with 128K of RAM that didn’t even HAVE a hard disk

    That sounds like the same hardware capabilities that MS-DOS 1.0 needed, but the Mac OS provided a graphical interface, which Microsoft only started to copy some years after MS-DOS 1.0.

    The absence of hard disk support was a boon for MS-DOS 1.0. Microsoft’s drivers or utilities could not cause the entire contents of hard disk partitions to be trashed.

    Backwards compatibility played a big role in MS-DOS 1.0. The manuals boasted that assembly language programmers could make system calls using the same familiar code they used in CP/M, giving MS-DOS 1.0 more compatibility than CP/M-86 had. (Though this didn’t make a bit of difference to anyone who had programmed in assembly language on IBM mainframes, Digital VAX computers and PDP non-computers, and a bunch of other long forgotten machines.)

  12. Vorn says:

    Windows’ success was more from the wide variety of manufacturers capable of slamming together a system that would run Windows. Apple was very strict about not letting other manufacturers in, and this cost them the market. The most dramatic example of compatibility breaking by Apple – OS X – actually improved their market share because it was a lot easier to program for it. Having a UNIX core didn’t hurt, and the compatibility layer given by Classic is top quality (I have as yet only found two applications that break under it, but I must admit I haven’t really been looking).

    Vorn

  13. Vorn says:

    …Not that backwards compatibility doesn’t help. Every once in a great while I find a need for some epically old Windows software, and being able to put it in my XP Pro machine and use it like it was 1989 all over again, except with better performance and a bit less crashing, sure is nice, and I thank the Windows team profusely for making sure that is possible.

    Vorn

  14. . says:

    >> …Not that backwards compatibility doesn’t help.

    Dosbox supports Windows 3.x now. By the way, so do VMWare, Bochs, VirtualPC (it depends whether you want an emulator or a virtual machine).

  15. on says:

    "the original Mac OS had many of the same issues…"

    And what about these copying issues? I’ve asked on several MS blogs and forums, but nobody would reply.

    Why do people think that MS copies Apple? I don’t believe it. For instance, now they say MS copied Exposé to create Flip3D (but I think Apple copied TaskGallery from MS Research).

    But they also claim hundreds of features were copied from OS X. Why is that? What are the roots of these opinions? Please, Larry, say something :-)

  16. Surge says:

    Is there any intention, or at least acknowledgement of the possibility, of making a Windows version with all the fixes, including those that break compatibility (with the breaks documented and fixes proposed, I’m sure)? Maybe not Win Home, or even Pro, but I sure know a lot of people who would buy a ‘best code’ version of Win 2K3. After all, in probably 99% of cases it will run on good new hardware, and it will not be used for 16-bit Win 1/2/3/9X apps or even NT/2K-era apps older than 1-2 years…

  17. vince says:

    Funny that these woefully under-equipped PCs back in the day could run fully-multitasking XENIX. And even funnier, XENIX was made by Microsoft…

    MS desperately needs to consign backwards-compatibility to some sort of emulation layer, like MacOS does. Or even Linux. I can run most DOS apps better under various DOS emulators for Linux than I can under Windows.

  18. Vince, no they couldn’t. Xenix ran on 80286 machines, but to my knowledge it never ran on the 8088. There’s a HUGE difference between the two processors.

  19. vince says:

    Xenix ran just fine on an 8086. 286 support wasn’t even added until Xenix 2.0. See http://en.wikipedia.org/wiki/Xenix or google yourself.

    So who’s the "young turk" now?

  20. Vince, I’d love to see the 8088 port actually running. I worked across the hall from the Xenix team in 1984 when they were doing the 286 Xenix port (they were working on the 386 port as I started).

    The physical limitations of the 8088 prevented even a Xenix port (if one actually existed) from being even vaguely robust – again, no privilege separation and no address space separation means no protection.

    I believe that some members of the Xenix team read this blog, they may want to comment…

  21. Moz says:

    But it says in Wikipedia, Larry, so it must be right.

    As far as backward compatibility goes, I have great hopes for Virtual PC as a solution. If MS were to use virtualisation (especially now that CPU support is there) to run 16/32-bit apps in plausible environments, there’d be a lot of room to ditch old APIs. Of course, the b*tching from people who discovered that their "Vista" app was running in Win95 compatibility mode because they’d played fast and loose with their APIs would be very intense…

  22. vince says:

    Well yes, you wouldn’t get address space separation, but you can definitely do pre-emptive multi-tasking.

    I mean, a Commodore 64 can run a Unix-like operating system with pre-emptive multitasking:

    http://lng.sourceforge.net/

    What bothers me is when MS people blame the OS’s shortcomings on backwards compatibility. Yes, computers were slow back then… but MS did a lot of short-sighted premature optimizations, and they and their users are paying for it even now. But rather than throw their immense resources into fixing it, what seems to happen instead is that all of the old stuff is documented poorly and old hands make fun of newcomers who trip up over all the obscure legacy stuff.

  23. Abc says:

    "Dave, Actually the COM vs .Net is an example of preserving compatibility. There’s a staggering amount functionality deployed both by MS and 3rd parties that’s available via COM, if we want to allow .Net apps to leverage that functionality, interop is the only way to go (at least for the medium term)."

    At least for the medium term indeed. I always thought that "COM Interop" was the wrong name for it. A better name would have been "SafeArrayTypeMismatchException: Specified array was not of the expected type."

  24. Norman Diamond says:

    Tuesday, January 17, 2006 4:56 AM by on

    > Why do people think that MS copies Apple?

    But everyone knows that Microsoft copied from Apple. A Xerox copy of a Xerox copy of an original that was made by a famous copier company.

  25. Dean Harding says:

    > The most dramatic example of compatibility breaking by Apple – OS X

    Actually, the most dramatic example of breaking compatibility was when Apple went from the Apple II to the Mac, which was 100% NOT backwards compatible.

    At least with OS X Apple tried to have some semblance of backwards compatibility with their "emulation" layer.

  26. Jim Lyon says:

    Vince is right: I personally ran Xenix on an 8086.

    Then again, when I wanted to get real work done, I ran MS-DOS.

  27. Anonymous Coward says:

    What is amazing is just how good the backwards compatibility is. The original Visicalc binary for MS-DOS 1.0 still works on XP. Grab it from:

    http://www.bricklin.com/history/vcexecutable.htm

    I actually ran it from a CIFS share and everything still worked fine.

  28. Gabe says:

    There are two important points to remember here:

    1. People will not upgrade if they don’t have to. Making a new version that doesn’t work almost exactly like the old version is pointless because it will never get used. That’s why Windows NT took almost 10 years to be universally adopted (as XP).

    2. Vestigial components are still important because they got us where we are now. For example, one’s belly button is completely useless for nearly all of one’s life, but it is impossible to be alive without it. Many aspects of Windows seem like ridiculous optimizations now, but Windows would have never been adopted without them.

  29. Daniel Jin says:

    like someone already said, virtual machines seem to be a very viable solution to the whole backward compatibility issue. (for example, I’d prefer that over the WOW layer)

    at some point, the only way to move forward is by not having to carry the dead weight around. it only gets heavier.

  30. Dean Harding says:

    The problem with a virtual machine, though, is that it’s completely isolated from the host operating system (except for standard interfaces like the fact that they expose network interfaces and stuff). Now, obviously, that’s the whole point of a virtual machine, but it makes it rather impractical for day-to-day use.

    Imagine you have an old word processor, and Windows took the "virtual machine" route to backwards compatibility. How are you going to share the documents you create in your old word processor with the rest of your applications? You’d have to set up networking in your VM, share the folder and copy it to a "local" folder in your host machine.

  31. vince says:

    > The problem with a virtual machine, though, is that it’s completely isolated from the host operating system

    I suggest you investigate something like "dosbox". The DOS emulation environment can quite easily access any file on the filesystem, without any weird hacks.

    As for Visicalc 1.0 running fine on modern Windows… I would hazard a guess that Visicalc doesn’t push the envelope as far as compatibility goes.

    Try running any early 1990s era DOS game that does Soundblaster sound and fancy EMM386 memory usage and report how well that works under XP…

  32. Hi Larry,

    This is so true! I know very few companies that can demonstrate this kind of respect for the code they wrote long ago and for the customer applications that were built on the basis of that code. It shows the tremendous amount of commitment and thought that goes into each line of code here.

    If you are interested in computer problem cartoons, do visit my blog at http://spaces.msn.com/members/sillygloop/

    Have a great year ahead!

    Vijay

  33. Daniel Jin says:

    > Imagine you have an old word processor, and Windows took the "virtual machine" route to backwards compatibility. How are you going to share the documents you create in your old word processor with the rest of your applications?

    I don’t know how Virtual PC handles it (last time I tried to install it on XP64, it wouldn’t work), but with VMware you can easily set up shares between the virtual machine and the host machine without going through the network, for example.

    The problem, however, is the resources required to run a virtual machine. Not a big issue if you run old DOS or Windows 1.0, but running, for example, XP SP2 under 64-bit Vista might not be practical on all machines (compared to WOW64). And of course, you lose the look and feel of the new OS.

    But pertaining to this discussion, it’s still a great solution for fixing some of the early ‘mistakes’ in Windows.

  34. Dean Harding says:

    Dosbox is an emulator, not a virtual machine. It’s a very different concept.

    Anyway, I’ve made my own blog post which summarizes all my thoughts:

    http://www.codeka.com/blogs/index.php/dean/2006/01/20/title_1

  35. Moz says:

    > The problem, however, is the resources required to run a virtual machine. Not a big issue if you run old DOS or Windows 1.0, but running, for example, XP SP2 under 64-bit Vista…

    I was specifically talking about using VMs for backward compatibility, and hopefully Vista will run most if not all XP apps without problems. So the performance issues should not arise.

    The thing about using virtualisation as part of the OS to give backward compatibility is that you/they could hard-code a heap of stuff and make optimised versions of the system for specific situations. For instance, there’s no need to include any 64-bit extensions for the 32-bit Windows versions, or indeed the 32-bit extensions for the 16-bit Win/DOS stuff.

    Sharing files could likewise be hardcoded – I see a temptation to share the disk and chroot-jail all the VMs. If you did something like that, it would be simple to "share files" by mapping the VM C: into the user’s "My Documents" as a "Windows 3.11 backwards compatibility documents" subfolder. Better minds than mine are hopefully working on this issue 😉

  36. Gabe says:

    Something that the "virtual OS" camp doesn’t realize is that programs don’t just have to run, they have to interact with other programs.

    For example, on Win64, you can have a command line that looks like this: "Win64Program.exe | DOSPROG.COM | Win32Prog.Exe | posix-cmd.exe" and run it without having to worry about how one VM will interact with the others. If you had DOS, Win32, and POSIX all running as separate system images, there would be no way to pipe the output of a program in one VM to a program in the other.

    Now imagine what would happen if a 32-bit program used a 16-bit installer (which is still too common). The 16-bit program running in its own little Win 3.1 world would be trying to install its programs to the VM’s virtual drive. It wouldn’t be able to create program groups. It wouldn’t be able to execute 32-bit DLL registration commands.

    How about trying to activate a Win95-based OLE object within a WinXP program? It’s all fine and dandy for you to run your Win95-only app in a VirtualPC image, but MS can’t decide to do that for you because it would break any program that you want to interoperate with any other program that uses a different version of the OS.

    Remember folks, MS actually used to sell XENIX and OS/2. MS used to think they were the next big thing, and advertised them as such. So how come that never happened? It all comes down to compatibility. We might all be using XENIX right now if it ran Lotus 1-2-3.

  37. vince says:

    > Remember folks, MS actually used to sell XENIX and OS/2. MS used to think they were the next big thing, and advertised them as such. So how come that never happened? It all comes down to compatibility. We might all be using XENIX right now if it ran Lotus 1-2-3.

    OS/2 arguably ran Win 3.1 programs a lot better than Win95 did. Things aren’t as simple as you make them out to be, and rarely are Microsoft decisions made for technological reasons. Most often they are made for marketing and/or shareholder-benefit reasons.

  38. paul says:

    > But it says in Wikipedia, Larry, so it must be right.

    The Wikipedia entry mentions that SCO did an 8086 PORT … not MS

  39. kccole01 says:

    Xenix ran on the 8088 machine but (obviously) only in single-user mode.