It’s the platform, Silly!

I’ve been mulling over writing this one for a while, and the other day I ran into the comment below, which inspired me to go further, so here goes.

Back in May, James Gosling was interviewed by Asia Computer Weekly.  In the interview, he commented:

One of the biggest problems in the Linux world is there is no such thing as Linux. There are like 300 different releases of Linux out there. They are all close but they are not the same. In particular, they are not close enough that if you are a software developer, you can develop one that can run on the others.

He’s completely right, IMHO.  Just as the IBM PC’s documented architecture meant that people could build PCs that were perfect hardware clones of IBM’s (thus ensuring that the hardware was the same across PCs), Microsoft’s platform stability meant that you could write for one platform and trust that your code would work on every machine running that platform.

There are huge numbers of people who’ve forgotten what the early days of the computer industry were like.  When I started working, most software was custom, or was tied to a piece of hardware.  My mother worked as the executive director for the American Association of Physicists in Medicine.  When she started working there (in the early 1980’s), most of the word processing was done on old Wang word processors.  These were dedicated machines that did one thing – they ran a custom word processing application that Wang wrote to go with the machine.  If you wanted to computerize the records of your business, you had two choices: You could buy a minicomputer and pay a programmer several thousand dollars to come up with a solution that exactly met your business needs.  Or you could buy a pre-packaged solution for that minicomputer.  That solution would also cost several thousand dollars, but it wouldn’t necessarily meet your needs.

A large portion of the reason that these solutions were so expensive is that the hardware cost was so high.  The general purpose computers that were available cost tens or hundreds of thousands of dollars and required expensive facilities to manage.  So there weren’t many of them, which meant that companies like Unilogic (makers of the Scribe word processing software, written by Brian Reid) charged hundreds of thousands of dollars for installations and tightly managed their code – you bought a license for the software that lasted only a year or so, after which you had to renew it.  It was particularly ugly when Scribe’s license ran out (it happened at CMU once by accident): the program would delete itself off the hard disk.

PCs started coming out in the late 1970s, but there weren’t that many commercial software packages available for them.  One problem developers encountered was that the machines had limited resources, but beyond that, software developers had to write for a specific platform – the hardware was different on each of these machines, as was the operating system, and each new platform linearly increases the amount of testing required.  If it takes two testers to test one platform, it’ll take four testers to test two platforms, six testers to test three platforms, etc. (this isn’t totally accurate, there are economies of scale, but in general the principle applies – the more platforms you support, the more test resources you require).

There WERE successful business solutions for the early PCs; VisiCalc, for example, first came out for the Apple ][.  But they were few and far between, and each was limited to a single hardware platform (again, because the test and development costs of writing to multiple platforms are prohibitive).

Then the IBM PC came out, with a documented hardware design (it wasn’t really open like “open source”, since only IBM contributed to the design process, but it was fully documented).  And with the IBM PC came a standard OS platform, MS-DOS (actually IBM offered three or four different operating systems, including CP/M and the UCSD P-system, but MS-DOS was the one that took off).  In fact, VisiCalc was one of the first applications ported to MS-DOS, btw; it was ported to DOS 2.0.  But it wasn’t until 1983 or so, with the introduction of Lotus 1-2-3, that the PC was seen as a business tool and people flocked to it.

But the platform still wasn’t completely stable.  The problem was that while MS-DOS did a great job of virtualizing the system storage (with the FAT filesystem), keyboard, and memory, it did a lousy job of providing access to the screen and printers.  The only built-in support for the screen was a simple teletype-like console output mechanism.  The only way to get color output or the ability to position text on the screen was to load a replacement console driver, ANSI.SYS, and console output through it was painfully slow.

Obviously, most ISVs (like Lotus) weren’t willing to deal with this performance issue, so they started writing directly to the video hardware.  On the original IBM PC, that wasn’t that big a deal – there were two choices, CGA or MDA (Color Graphics Adapter and Monochrome Display Adapter).  Two choices, two code paths to test.  So the test cost was manageable for most ISVs.  Of course, the hardware world didn’t stay still.  Hercules came out with their graphics adapter for the IBM monochrome monitor.  Now we have three paths.  Then IBM came out with the EGA and VGA.  Now we have FIVE paths to test.  Most of these were compatible with the basic CGA/MDA, but not all, and they all had different ways of providing their enhancements.  Some had some “unique” hardware features, like the write-only hardware registers on the EGA.

At the same time as these display adapter improvements were coming, disks were also improving – first 5 ¼ inch floppies, then 10M hard disks, then 20M hard disks, then 30M.  And system memory increased from 16K to 32K to 64K to 256K to 640K.  Throughout all of it, the MS-DOS filesystem and memory interfaces continued to provide a consistent API to code to.  So developers continued to write to the MS-DOS filesystem APIs and grumbled about the costs of testing the various video combinations.

But even so, vendors flocked to MS-DOS.  The combination of a consistent hardware platform and a consistent software interface to that platform was an unbelievably attractive combination.  At the time, the major competition to MS-DOS was Unix and the various DR-DOS variants, but none of them provided the same level of consistency.  If you wanted to program to Unix, you had to choose between Solaris, 4.2BSD, AIX, IRIX, or any of the other variants, each of which was effectively a different platform.  Solaris’ signals behaved subtly differently from AIX’s, etc.  Even though the platforms were ostensibly the same, there were enough subtle differences that you either wrote for only one platform, or you took on the burden of running the complete test matrix on EVERY version of the platform you supported.  If you ever look at the source code to an application written for *nix, you can see this quite clearly – there are literally dozens of conditional compilation options for the various platforms.
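To make that concrete, here’s a minimal sketch of what those conditional compilation options look like.  The platform macros used (__sun, _AIX, __sgi, __linux__) are the usual compiler-predefined ones; a real application would have dozens of blocks like this, guarding far messier differences than a name string.

    /* Sketch of the per-platform conditional compilation a portable *nix
       application needed.  The macros are the standard compiler-predefined
       platform macros; real applications guard much messier differences. */
    #include <stdio.h>

    #if defined(__sun)
    #  define PLATFORM_NAME "Solaris"
    #elif defined(_AIX)
    #  define PLATFORM_NAME "AIX"
    #elif defined(__sgi)
    #  define PLATFORM_NAME "IRIX"
    #elif defined(__linux__)
    #  define PLATFORM_NAME "Linux"
    #else
    #  define PLATFORM_NAME "some other *nix"
    #endif

    int main(void)
    {
        printf("Built for %s\n", PLATFORM_NAME);
        return 0;
    }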

On MS-DOS, on the other hand, if your app worked on an IBM PC, your app worked on a Compaq.  Because of the effort put into ensuring upwards compatibility of applications, if your application ran on DOS 2.0, it ran on DOS 3.0 (modulo some minor issues related to FCB I/O).  Because the platforms were almost identical, your app would continue to run.  This commitment to platform stability has continued to this day – VisiCalc from DOS 2.0 still runs on Windows XP.

This meant that you could target the entire ecosystem of IBM PC compatible hardware with a single test pass, which significantly reduced your costs.  You still had to deal with the video and printer issues, however.

Now along came Windows 1.0.  It virtualized the video and printing interfaces, providing, for the first time, a consistent view of ALL the hardware on the computer, not just disk and memory.  Now apps could write to one API and not worry about the underlying hardware.  Windows took care of all the nasty bits of dealing with the various vagaries of hardware.  This meant that you had an even more stable platform to test against than you had before.  Again, this was a huge improvement for ISVs developing software – they no longer had to worry about the video or printing subsystem’s inconsistencies.

Windows still wasn’t an attractive platform to build on, though, since it had the same memory constraints as DOS.  Windows 3.0 fixed that, providing a consistent API that finally relieved the 640K memory barrier.

Fast forward to 1993 – NT 3.1 comes out, providing the Win32 API set.  Once again, you have a consistent set of APIs that abstracts the hardware.  Win9x, when it came out, continued the tradition.  Again, the API is consistent.  Apps written to Win32g (the subset of Win32 intended for Win 3.1) still run on Windows XP without modification.  One set of development costs, one set of test costs.  The platform is stable.  With the Unix derivatives, you still had to either target a single platform or bear the costs of testing against all the different variants.

In 1995, Sun announced its new Java technology to the world.  Its biggest promise was that it would, like Windows, deliver platform stability.  In addition, it promised cross-operating-system stability.  If you wrote to Java, you’d be guaranteed that your app would run on every JVM in the world.  In other words, it would finally provide application authors the same level of platform stability that Windows provided, and it would go Windows one better by providing the same level of stability across multiple hardware and operating system platforms.

In that interview, Gosling is just expressing his frustration with the fact that Linux isn’t a completely stable platform.  Since Java is supposed to provide a totally stable platform for application development, Java needs to smooth out the differences between operating systems, just like Windows needs to smooth out the differences between the hardware on the PC.

The problem is that Linux platforms AREN’T totally stable.  While the kernel might be the same on all distributions (and it’s not, since different distributions use different versions of the kernel), the other applications that make up the distribution might not be.  Java needs to be able to smooth out ALL the differences in the platform, since its bread and butter is providing a stable platform.  If some Java facilities require things outside the basic kernel, then they’ve got to deal with all the vagaries of the different versions of those external components.  As Gosling commented, “They are all close, but not the same.”  These differences aren’t that big a deal for someone writing an open source application, since the open source methodology fights against packaged software development.  Think about it: How many non open-source software products can you name that are written for open source operating systems?  What distributions do they support?  Does Oracle support any Linux distributions other than Red Hat Enterprise?  The reason that there are so few is that the cost of development for the various “Linux” derivatives is close to prohibitive for most shrink-wrapped software vendors; instead they pick a single distribution and support only that (thus guaranteeing themselves a stable platform).

For open source applications, the cost of testing and support is pushed from the developer of the package to the end-user.  It’s no longer the responsibility of the author of the software to guarantee that their software works on a given customer’s machine; since the customer has the source, they can fix the problem themselves.

In my honest opinion, platform stability is the single biggest thing that Microsoft’s monoculture has brought to the PC industry.  Sure, there’s a monoculture, but that means that developers only have to write to a single API.  They only have to test on a single platform.  The code that works on a Dell works on a Compaq, works on a Sue’s Hardware Special.  If an application runs on Windows NT 3.1, it’ll continue to run on Windows XP.

And as a result of the total stability of the platform, a vendor like Lotus can write a shrink-wrapped application like Lotus 1-2-3 and sell it to hundreds of millions of users and be able to guarantee that their application will run the same on every single customer’s machine. 

This allows Lotus to reduce the price of their software product.  Instead of a software product costing tens of thousands of dollars, software costs have fallen to the point where you can buy a fully featured word processor for under $130.

Without this platform stability, the testing and development costs go through the roof, and software costs escalate enormously.

When I started working in the industry, there was no volume market for fully featured shrink wrapped software, which meant that it wasn’t possible to amortize the costs of development over millions of units sold. 

The existence of a stable platform has allowed the industry to grow and flourish.  Without a stable platform, development and test costs would rise and those costs would be passed on to the customer.

Having a software monoculture is NOT necessarily an evil. 

Comments (68)

  1. As an example of how different Linux distributions are: I was trying to VPN into my workplace this weekend using SuSE 9.1 Professional. Using PPTP Client and following their directions for SuSE 9.1 (my distribution, mind you), I was unable to connect to my VPN.

    They instruct you to disregard dependencies that are based on RedHat for the graphical configuration program. I did and tried to get the graphical program working. No go.

    All in all I spent nearly 6 hours playing with everything before I managed to connect. After I had connected successfully I discovered that data wouldn’t send. No matter what I did, data would not traverse the newly created tunnel. My guess is that the negotiated key for data encryption is being lost somewhere along the way…

    Connecting with Windows XP takes about 45 seconds. And before anyone complains about "I bet you’re connecting to other Microsoft products!" — I’m not. I’m connecting to a WatchGuard FireBox which runs, ironically enough, an embedded version of Linux as its operating system.

    This is but one example. I’ve always toyed with Linux — I have CDs for Red Hat Linux 2.1 from geez… 1995, 1996? As it has grown older it has become *more* difficult for me to use and configure because distributions change so much. Should this software install in /opt/bin or /usr/bin? Where are my configuration files? Every package becomes a nightmare to install and use — there are no standards, and even when there are standards the distributions don’t enforce them, so they become effectively meaningless.

    Of course most of this rant applies to X. On the command line, from a raw system, Linux can do what you need it to. Just don’t install anything but the bare minimum and install from source. (ala Gentoo) Because you have a bare bones system you can decide where software goes, how it goes there, and why. I have seven programs to modify the volume of my sound card, but nothing to configure a VPN tunnel for my use.

    And don’t get me wrong on the latest trend… naming icons on the menu after their purpose, rather than what they are. Yes, if a program has an icon of a notebook I expect it to be a word processor of some sort. I don’t need "Word Processor" spelled out in front of it. Tell me what the heck it is called: Open Office Writer. This gives me a unique description to latch onto and describe to folks, where Word Processor may be AbiWord on another machine. *Not good.*

    Windows is more of an evolution of a specific vehicle. Windows NT, Windows 2K, Windows XP are all SUVs. They require a lot of gas but they have more power. Linux is more like different car models entirely. The engine is based on the same core components, and a lot of the interior is somewhat the same, but the exterior of the car, color scheme, radio type, automatic or manual… these are all decisions left to the distribution.

    It’s a confusing mess. As a software developer I’m trying to hedge my bets by becoming familiar with the differences between .NET on Win32 and Mono on Linux, but so far it has been an expensive and painful battle.

  2. senkwe says:

    My crystal ball tells me that you will be slashdotted in the next few hours with a headline saying "MS developer says MS monoculture is NOT evil" 🙂 And no, I’m not planning on submitting the story 😀

  3. Mo says:

    <em>In particular, they are not close enough that if you are a software developer, you can develop one that can run on the others.</em>

    Bzzt. Wrong.

    If you only care about <em>binary</em> compatibility, then there aren’t too many cases where systems differ. You just need to know what you’re doing – and many people quite evidently don’t.

    If it’s source-level compatibility you care about, things are generally much easier. Write your code in conformance with the published specs, rather than what you <em>think</em> the specs say, and don’t misuse feature-test macros, and you’ll be fine. You still need to know what you’re doing, of course, but it’s a lot easier. (There’s a minimal sketch of this at the end of this comment.)

    Linux isn’t a platform, however. The reason Richard Stallman bangs on about calling it ‘GNU/Linux’ isn’t just ego-driven: Linux is just a kernel. It’s sort of like the difference (though this isn’t a fantastic analogy) between Win32 and NT – Win32 is the platform, NT is the OS.

    The biggest problem with binary-only distributions of software for Linux-based platforms is that vendors make assumptions. They assume certain configuration files will be in a specific place, when they might not necessarily be there. They assume certain libraries will both be present, and of a specific version, and that the distribution uses a particular naming scheme (rather than checking, or shipping their product with those libraries included). If they’re unable to figure out a way to get around these assumptions, then they should talk to the developers of the software they’re making assumptions about instead of blindly carrying on anyway, causing hell for users.
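    To illustrate the feature-test-macro point above, here’s a minimal sketch. It assumes a reasonably modern POSIX system; the only interfaces it uses (sysconf and _SC_PAGESIZE) are specified by POSIX, so the same source builds unchanged on Solaris, AIX, IRIX, the BSDs, and Linux:

        /* Ask for POSIX.1-2001 interfaces explicitly instead of relying on
           whatever each vendor's headers happen to expose by default. */
        #define _POSIX_C_SOURCE 200112L
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* sysconf() and _SC_PAGESIZE are specified by POSIX, so this
               behaves the same on any conforming system. */
            printf("page size: %ld\n", sysconf(_SC_PAGESIZE));
            return 0;
        }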

  4. Mo, you’ve EXACTLY made my point in your comment. The fact that the configuration files are in different places on different distributions means that the vendor has to TEST all those different distributions.

    Testing adds to costs. Costs get passed on to users. If you’re selling an application that costs $10,000 a copy, then your price supports the cost of testing all those configurations. When your app costs $129, you can’t afford the test costs. The monoculture allows for the $129 word processing application; it couldn’t exist without the monoculture.

    And senkwe: Maybe so. If /. picks this up, then so be it. I’ve said what I meant, and I meant what I said (An elephant’s faithful 100%?)

  5. Mike Dunn says:

    It can be really frustrating to get one binary that works all the way back to Win 98 (which is what I support in my work – I can’t be like MS and ignore the 9x users).

    I have to worry about different shell versions, IE versions, common control versions, bugs or different behavior among those, and on top of that know which APIs are only in 2000/XP.

    For example, do I use SHGetSpecialFolderLocation, SHGetSpecialFolderPath, SHGetFolderPath, or…? While the existence and availability of the various APIs are well-documented, the UNavailability is not – tell me, which CSIDL_* values work on 9x? NT4? With shfolder.dll and without shfolder.dll? I have no way of knowing without writing a test app to call the SHGet* functions with all the CSIDL_* values.
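    For what it’s worth, here’s a sketch of the usual workaround: load shfolder.dll at runtime and fall back to the older API when it isn’t available. This is only an illustration with minimal error handling, and CSIDL_APPDATA is just an example value:

        /* Sketch: prefer SHGetFolderPathA from the redistributable shfolder.dll,
           and fall back to SHGetSpecialFolderPathA (itself only present with
           newer shell32 versions) if that fails. */
        #include <windows.h>
        #include <shlobj.h>

        typedef HRESULT (WINAPI *PFNSHGETFOLDERPATHA)(HWND, int, HANDLE, DWORD, LPSTR);

        BOOL GetAppDataFolder(char *path /* at least MAX_PATH chars */)
        {
            HMODULE hShFolder = LoadLibraryA("shfolder.dll");
            if (hShFolder != NULL)
            {
                PFNSHGETFOLDERPATHA pfn = (PFNSHGETFOLDERPATHA)
                    GetProcAddress(hShFolder, "SHGetFolderPathA");
                if (pfn != NULL && SUCCEEDED(pfn(NULL, CSIDL_APPDATA, NULL, 0, path)))
                {
                    FreeLibrary(hShFolder);
                    return TRUE;
                }
                FreeLibrary(hShFolder);
            }
            /* Downlevel fallback. */
            return SHGetSpecialFolderPathA(NULL, path, CSIDL_APPDATA, FALSE);
        }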

  6. matthew says:

    I was enjoying your post until I got up to the bit about DRDOS not providing the consistency of MSDOS.

    Say what? Care to explain this? Apart, of course, from Microsoft’s own deliberate code that would prevent Windows from running on DR DOS.

    I ran DRDOS 6.0 and it was better than what Microsoft had at the time (MSDOS 4).

  7. Mike: Hmm… You have a good point Mike, there CAN be a bewildering set of options to consider.

    With a smidge of digging (I searched for CSIDL and clicked on the 2nd link), I found this from MSDN:

    The information there, combined with:

    appears to pretty thoroughly document which CSIDL_ versions work with which versions of the common controls, and what platforms those controls are used on.

    Is there more info that you need to know?

  8. matthew, you’re right, I unnecessarily tarred DR-DOS with the inconsistency brush. DR-DOS was just as platform stable as MS-DOS.

    As long as you used DR-DOS as an MS-DOS replacement (and it was a good replacement, you are right), and stayed away from DR-DOS’s multitasking extensions (which then tied you to the DR-DOS platform).

  9. Drew says:

    Mike does have a point about usability/discoverability. Why can’t devs fire up MSDN, select a minimum/maximum OS for their apps to run on, and have it display only the information that they can use? It would be cool if VS could do the same with Intellisense. Imagine how much better life would be if VS suggested alternate APIs to use because of the platform(s) on which you told it you wanted to run your app.

  10. Wow. Drew, that is an AWESOME idea. You can do that for C++/C#/VB in the CLR sections; it would be fascinating if the same could be done for the mainline MSDN documentation.

    Now the challenge is finding an MSDN person to suggest this to. I’ll ask around.

  11. Jeff says:

    “The problem is that Linux platforms AREN’T totally stable.”

    Anyone making that statement does not have a lot of experience with a Linux operating system. Our Linux servers’ uptime is measured in quarters / years while our Windows servers’ is measured in days / weeks. Check out Netcraft for the Internet’s best uptime servers and you’ll see that BSD/Linux servers own the top 100.

    “The problem is that while the kernel might be the same on all distributions (and it’s not, since different distributions use different versions of the kernel), the other applications that make up the distribution might not.”

    Last time I checked, Microsoft still sells a desktop version of Windows vs. a Professional vs. a Server grade operating system. Thus, different distributions of Linux are designed with different goals in mind.

    I don’t think very many people will argue that the desktop market for Linux hasn’t evolved yet, but the same can be argued for why Microsoft still hasn’t evolved into the server market as much as they’d like to have. Again, look at the Apache vs. IIS wars and why, after IIS gained strong ground in 2002, it has dropped back down to its 2000 level.

    “Java needs to be able to smooth out ALL the differences in the platform, since its bread and butter is providing a stable platform. If some Java facilities require things outside the basic kernel, then they’ve got to deal with all the vagaries of the different versions of the external components.”

    As far as Java’s bread and butter goes, Java was built to be portable, not tied to a particular operating system. Java has great support in Linux/BSD operating systems and this doesn’t seem to be a problem. The Java installation is a binary distribution and its external dependencies are next to nothing. Even the desktop features are tied more into X than KDE or Gnome.

    “How many non open-source software products can you name that are written for open source operating systems? “

    Sun’s Java, Oracle, IBM’s Lotus Notes, Word Perfect, RealPlayer, Adobe Acrobat, and VMWare, not to mention at least a dozen games, pop off the top of my head (Quake III even before the source code release). Not bad for an operating system that focuses on the server market more than the desktop.

    “What distributions do they support? Does Oracle support other Linux distributions other than Red Hat Enterprise? The reason that there are so few is that the cost of development for the various “Linux” derivatives is close to prohibitive for most shrink-wrapped software vendors; instead they pick a single distribution and use that (thus guaranteeing a stable platform).”

    Red Hat has focused on the Enterprise market and has built a solid reputation, so it’s only natural that Oracle would choose that vendor because of their business. However, while I’m not 100% certain that Red Hat is the only Linux distribution supported, Oracle runs on FreeBSD. I think the key here is which operating system does Oracle run better on? Linux/BSD or Windows? (Search on Google for that answer or ask your friendly local Oracle rep.)

  12. Jeff, platform stability doesn’t refer to how long a particular computer stays up. It has to do with how much a platform changes from machine to machine, from version to version.

    Win32 applications written for XP Home run on XP Pro, and they run on Windows Server 2003. In general, this is true for all platforms (Mike Dunn did correctly point out that there ARE platform differences, but they’re relatively limited). He’s right, it CAN be a challenge writing an application using today’s SDK that will run on Win95, unless you are careful to stay within the well documented boundaries of Windows as it existed in 1995. But the applications written for Win95 still run on Windows today (unless they were written with dependencies outside the Win32 API set).

    The Windows operating system IS differentiated into different products. But the Windows PLATFORM is the same regardless. If your app runs on Windows XP Home, your app is almost certainly going to run on XP Pro. And it’s almost guaranteed to run on W2K3 server as well (it might not, because W2K3 Server Enterprise Edition doesn’t have audio enabled by default, and if your app depends on audio, it might fail; that’s one of the platform differences).

    You cannot make the same statement about differing Linux distributions. That’s what Gosling was complaining about – Java VMs have to be tested on every possible Linux distribution, because Java is a shrink-wrap solution. And that testing is expensive.

    Going through the rest of your list of non open source Linux applications…

    The Real Networks player is described as being a "user supported" player. From the web site, it looks like it’s an open source distribution, so it doesn’t count – it’s an open source product, not closed source.

    Adobe Reader does appear to be a shrinkwrapped product, so that one is very real. They also don’t charge for it, so it’s clearly a loss leader (like the free reader is for Windows); they’re recovering the costs of developing the free reader somewhere else. They appear to only support Linux 2.2 on x86 computers, fwiw.

    I looked at the Corel web site and couldn’t find any indication of where I could get a copy of Word Perfect for Linux. I found press releases announcing that they were doing it, but no software. It appears that they stopped development at the 0.9 version.

    VMWare appears to be an operating system, although it does support Linux management console machines. This IS a good example of a closed source Linux product, since the management console for Linux costs only $199, and claims to support most versions of the 2.2 or 2.4 Linux kernel.

    Lotus Notes supports ONLY the following Linux versions: Red Hat Enterprise Linux AS 2.1 (uniprocessor only), and UnitedLinux 1.0. They don’t support "Linux", they support two Linux distributions, RHEL and UnitedLinux 1.0.

    The only Oracle platform I could find supported was RHEL, but there may be others; I just couldn’t find them. I couldn’t find ANY indication on the Oracle web site of the platforms on which Oracle is supported – all the other vendors above listed their supported versions without too much effort; I could not make the same statement about Oracle.

    And which platform Oracle runs better on is irrelevant to my article. My thesis is simply that in the absence of a stable platform, the cost of software development is passed on to the consumer. Oracle charges tens of thousands of dollars a copy for their software; the cost of cross-platform development and testing is built into the cost of their product.

    This article isn’t about whose operating system is better. It’s about the cost of software development and who pays for it. If you have a single stable (unchanging) platform, then the software vendor can reduce their development costs and produce software for far less than if you have to support multiple platforms. When software vendors’ costs go down, those cost savings get passed on to the consumer.

    If you don’t have the single stable platform, then you need to test all the variants of the platform. That adds to the cost of writing software. If the software vendors’ costs are higher, they will pass those costs on to the consumer, and software cost goes up.

    So I come back to my original conclusion: The Microsoft Monoculture has enabled cheap software. If the monoculture didn’t exist, software wouldn’t be as cheap.

  13. Dru Nelson says:

    Jeff, sorry, but I have to call BS on that.

    Yes, I can make linux run for a long time on a particular kernel, but I have to be extremely careful about what patches and hardware I use. Even then, I ran into bugs. The Linux VM does have certain race conditions.

    Larry is talking about the platform of Linux in general, and he is right. If you look at a lot of apps for Linux, they have a ton of dependencies on specific versions. If you run an older redhat, you can’t even do a source level build with some of these systems.

    Note, I have used just about every form of Unix, VMS, Windows, etc…. he’s right.

  14. ray says:

    Larry, AFAIK, win32g was the (very early) precursor to DirectX (and also seems to be a virus payload), while win32s was the 32-bit subset for Windows 3.1 and 3.11.

  15. Andrew Shuttlewood says:

    There is more stability in the Windows world than in the Linux world, but it’s nowhere near as bad as you make out.

    90% of applications will use the same (or mostly the same) kernel calls, and a version of libc. The kernel has compatibility back to very, very old versions, and glibc has compatibility back to very, very old versions of glibc.

    If you use hidden features of glibc, yes, you will be screwed. But you know what, your Linux distributor can do the equivalent of compatibility shims by forcing you to load a specific library before you run.

    If instead of just assuming that a file is in /usr/bin, you have an entry in a config file saying "this application is located here", then you can run on pretty much any Linux platform. (Note, you should probably make it possible to load your config file from an arbitrary location. There’s a minimal sketch of this idea at the end of this comment.)

    The only key exceptions I can think of for most vendors are a) changing APIs for UIs (which can still run, you just need to install the (older) libraries), and b) kernel modules, which rely on the internals of the kernel.
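    A minimal sketch of the "look it up, don’t assume /usr/bin" idea above (the environment variable name and the default path are hypothetical, purely for illustration; a real application might read a config file instead):

        /* Resolve a helper's location from an override instead of hardcoding it.
           MYAPP_HELPER is a hypothetical environment variable for illustration. */
        #include <stdio.h>
        #include <stdlib.h>

        static const char *helper_path(void)
        {
            const char *p = getenv("MYAPP_HELPER");   /* user/distro override */
            return (p != NULL && p[0] != '\0') ? p : "/usr/bin/helper";
        }

        int main(void)
        {
            printf("using helper at: %s\n", helper_path());
            return 0;
        }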

  16. Mat Hall says:

    Just have to say that your link to the "fully functional WP for $130" is a bit misleading — it’s an upgrade version, so the TCO is more than $130…


  17. @#$@, You’re right Ray… Ten year old technologies 🙂

    Andrew – but are the systems binary compatible? Will a binary written to XFree86 run on all distributions?

    And Mat, you’re right, but this was the easiest and first example I could find – there are others however.

  18. David Candy says:

    When being forced to do computing for a Social Science thing (new rule was can’t graduate without doing computers) I passed on the VAX and went for MS-DOS (as I had just left a job where we had bought an AT, as ATs could print lower case and our mainframes couldn’t). Unfortunately they were Apricots, non-PC-compatibles. Basically it meant only DOS external commands ran, or certain software from England specifically written for it.

    So none of my home programs ran on it. That meant NO EDITOR apart from edlin (I used a 4k full screen editor at home and an 8k full screen file manager [that I still sometimes use but not often] both writing to video memory).

    This is when I learnt about compatibility. Nothing is more important. Nothing. Why would I buy a Mac – to read reviews of programs I couldn’t use?

    After this some major DOS programs had options for DOS/BIOS/direct hardware writing, but utilities rarely did this. But Apricots were long gone by then.

  19. Cheryl "Mayleth" K. says:

    Very interesting article and easy to read. I enjoyed it very much. I hear so much criticism of Windows, it’s nice to hear about what it helped accomplish for the computer industry. I had been aware that it was the openness of IBM that aided in its eventual rise over the Mac: Joe Schmoe’s Electronics could design, build, and profit from making specific pieces of hardware (or software) for the PC without having to build and design an entire computer and OS, thus lowering costs and exciting interest from the business world as well as from technologically inclined individuals. But I only saw it from the logical point of view of businesses, never bothering to consider how extensive a role the “software monoculture” has played in the growth of technology in our society.

    So, thanks!! 🙂


  20. Larry: yes, a binary written for XFree86 will work on all current distributions. That was the case with, for instance, Netscape 4 (back when it was the most used browser on Unix) — there were at most two different versions (for a.out and ELF), and none was specific to a distribution.

    Most library developers avoid breaking backwards compatibility, and when they do break it, they change the soname (the filename used to load the library), allowing you to have both the older version of the library and the newer version available at the same time (of course, trying to load both versions in the same process at the same time is asking for trouble). For instance, I have two versions of libpng installed here, and the programs use the one they were compiled with.

    A few libraries (mostly the C runtime library) do it differently, using symbol versioning (the library has more than one version of some symbols, in cases where they changed in incompatible ways, and the program will call the one it was compiled to use). A program compiled against a newer version of the library will fail to load on an older version of the library, but a program compiled against an older version of the library will work fine with a newer version of the library.

    Of course, things can and do break when you use undocumented (or documented as internal) interfaces, or when you try to do things like distributing relocatable object files (the symbol versions aren’t fixed until you do the final linking). When that happens, you have to distribute different copies for different versions of the library (you might have to distribute one copy for glibc 2.2 and another for glibc 2.3). This happened recently with the new version of the threading library (the old version was linuxthreads, the new one is NPTL — it was a complete rewrite. Most programs didn’t notice, a few were doing things that weren’t in the standard but worked with the older implementation, and a smaller few (mostly Java and Wine) were using internal interfaces), but you could easily force a program to use the older version in that specific case.

    Things are different for device drivers — the only guaranteed way to distribute one is to have it included in the mainstream kernel sources. The internal kernel API changes with almost every minor version (with all "in-tree" drivers upgraded to match), and the ABI also varies depending on which compiler you used and which options you compiled in the kernel (nVidia avoids most of that problem by distributing a source code "shim" to interface with their binary kernel module).

    Let’s for instance look at one program that comes with XFree86, xlsfonts:

    $ readelf -d /usr/X11R6/bin/xlsfonts

    Dynamic segment at offset 0x3bd4 contains 22 entries:
      Tag        Type      Name/Value
     0x00000001 (NEEDED)   Shared library: []
     0x00000001 (NEEDED)   Shared library: []
     0x00000001 (NEEDED)   Shared library: []


    Let’s look, for instance, at the first of those. On my system, it’s a symlink to the file that actually contains the library. On some other system, it might be a symlink to an older version of the library. However, the only ABI difference (if there is one) would be the addition of new symbols; and if a program tries to use a symbol which is not present in the library, the dynamic linker will abort loading the program with an error message.

    Notice there’s nothing distribution-specific there; every "normal" distribution (excluding strange things like mini floppy-only ones) has the dynamic loader in /lib (which comes from glibc), a version of the C library (again, glibc), and a version of the X libraries (which come from either XFree86 or X.Org). So, as long as you have a recent enough version of glibc and XFree86 (or X.Org), the program will run.

    This happens because a distribution is a packaged set of components, but all distributions get the same components. The differences between distributions simply aren’t that great (what you would find is differences between versions of packages, different kernel versions, and sometimes a bit of difference on filesystem layout).

    The greatest difference between distributions tend to be administrator-related: things like installation, upgrading, and configuration.

    And as to supporting a distribution — often a program will run just fine on an "unsupported" distribution. It just wasn’t tested there (and so, they can’t say "it will work" even if it will).

    (Linux user since 1997, Debian user since about 1998)

    PS: If the text flow between the paragraphs was strange above, it’s due to the lack of a preview and an unusably small comments box.

  21. Gary says:

    "If an application runs on Windows NT 3.1, it’ll continue to run on Windows XP."

    For the most part that’s true, but there are definitely exceptions. BlockInput() on Win2000 lets you create a DirectInput device that can still access your input devices. However, WinXP "fixes" this and BlockInput blocks everything, even DirectInput. Since BlockInput blocks input events I didn’t quite understand how this would impact DirectInput since DI talks to the driver and doesn’t care about events.

  22. Andrew Shuttlewood says:

    We got a binary written for libc4 (way before I even ran Linux, which must be over 6 years ago) working on our uni boxes, which were all libc6. All we needed was to install the library and it worked fine. This was a while ago (when I was still at uni), but the compatibility is there. Sometimes you may have to install the older versions of libraries, but it is perfectly possible to configure the library loading path and even preload libraries.

    Like Cesar says, the only bad thing is that drivers can stop working. Of course, if somebody can find me a driver for my Sidewinder gamepad for Windows XP I’ll be a happy man..

  23. John Elliott says:

    I’ve been using Linux (various distros) since 1996. Twice (once as recently as last month) I reinstalled from scratch, and then copied over the programs I’d previously installed in /usr/local/bin. I thought it might be a fun idea to check if these programs still run.

    The oldest program is Netscape 2 (30 April 1996). It needed the libc4 libraries, and I had to reinstall some of its support files which had gone missing over the years. After that, it ran; I’m using it to post this comment.

    The next oldest program is koules.svga (11 Sep 1996). This didn’t run until I replaced the SVGAlib DLLs with older ones; koules was linking to an internal variable (__svgalib_console_fd) that disappeared in later versions. Of course, the good thing about having the source is that you can see the comment accompanying this: /*quickhacked console switching.. */

    The next program in the list was Netscape 3 (21 Oct 1996); this needed libc5. Once that was installed, it and all the later programs I tried (dated 17 Nov 1996 up to 5 March 2004) worked flawlessly as well.

  24. Chris Altmann says:

    Andrew and John,

    Note who had to do the work of getting those apps to run. It wasn’t the distro maker and it wasn’t the app developer.

  25. Thank you Chris – I was out with the kids all day so I didn’t get to answer.

    Cesar, does XFree86 exist on all Linux machines? If I’m writing a shrink-wrapped application, which window manager do I write to? Remember – I’m going to ship a binary, no source code, so I want to ship one binary for all distributions. Is that still possible?

  26. Jeff says:

    Chris, that’s no different than the countless times that most of us have had to track down a particular version of the VB runtime, manually install DirectX, the Microsoft Foundation Classes, or even install the .NET runtime. While the quality of the Windows installer is exponentially better now than it was even a few years ago, running older applications still puts a lot of work on the end user.

    Running older applications takes work regardless of the platform, so we can’t assume that all applications run without work.

    Larry, XFree86 can run on Linux, Unix, all of the BSD variants, Mac OS X (via Darwin), etc. Other than router/server-only distributions of Linux, XFree86 is the underlying standard that “window managers” run on. If you write a binary to run on XFree86, then that’s very similar to writing a win32 application, in that it’ll run wherever X Windows runs.

    I think a point that’s missed is that many people assume that installing / upgrading applications in Windows is easier than installing them in a Linux/BSD environment. When in reality it’s far easier on most distributions of Linux and BSD. The ports tree in FreeBSD and the portage tree in Gentoo Linux make installing applications trivial. Even applications such as VMWare (which for the record is not an operating system, but does allow multiple OS’s to be installed and run, similar to Virtual PC) can be installed via these tools easily, and they take care of installing the proper libraries for you if they do not exist.

    For people who are not used to it, this system allows you to install both new and old applications via a GUI application or console. These systems take the guesswork out of installation / upgrading. It’s true that Windows has Windows Update, but that only gets you operating system related updates, while the ports-style tools look at the whole system.

  27. Jeff, you missed my point. I’m selling a shrinkwrapped package. If I want to support "Linux", I need to have a platform that supports all of the features my app needs. If I can’t rely on a single graphical manager to be present, I have two choices: Not write a GUI for my app (which is usually NOT a choice if I’m writing a word processor), or write my word processor in such a way that it runs on whatever GUI happens to run on the machine.

    And that means that I’ve got to test with every possible Linux GUI. Which increases my costs. Which get passed on to the customer.

    On Windows, I’ve got one GUI, it works everywhere. It works with every PC graphics card, with the exact same interfaces.

    If VMWare’s not an operating system, then why do its requirements list hardware and not a platform? The requirements for the management console go into great detail about what platforms the management console runs on, but the VMWare base product has NO mention of the OS platform. The base VMWare platform requirements list a set of low-level hardware requirements, which implies that it doesn’t require an operating system (otherwise it’d care about the OS it ran on, and not just the hardware).

  28. Jeff says:

    Larry, the point you’re missing is that if you want to write a universally accepted Linux GUI application, then write it to support XFree86. It will run in all graphical managers because it uses the same APIs that the window managers use.

    The window managers Gnome, KDE, etc. are written on top of the X / XFree86 APIs. Each of them also has its own API of features that allow you to graphically extend it. Some developers like to use these, just as some Windows programmers like to use MFC, whereas some people prefer to use the XFree86 libraries directly, just as some Windows people prefer to use the win32 API. (There’s a minimal sketch of the latter at the end of this comment.)

    There’s a workstation version of VMWare and a Server version. VMWare Workstation edition has its requirements in the left-hand bar of its product page, whereas you probably stumbled on their Server line, which lets you run multiple Windows 2003 virtual servers in monitored, isolated environments.
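    To illustrate “using the XFree86 libraries directly”, here’s a minimal Xlib sketch. It assumes only libX11 (compile with -lX11); all the calls are standard Xlib, and the program neither knows nor cares which window manager or desktop environment is running:

        /* Minimal Xlib program: open a window and wait for a key press.
           It talks only to the X server, so it runs under any window manager
           or desktop environment.  Build with: cc hello.c -o hello -lX11 */
        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy;
            int screen;
            Window win;
            XEvent ev;

            dpy = XOpenDisplay(NULL);                   /* connect to the X server */
            if (dpy == NULL) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }
            screen = DefaultScreen(dpy);
            win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                      10, 10, 300, 200, 1,
                                      BlackPixel(dpy, screen),
                                      WhitePixel(dpy, screen));
            XStoreName(dpy, win, "hello");              /* window title */
            XSelectInput(dpy, win, ExposureMask | KeyPressMask);
            XMapWindow(dpy, win);                       /* ask the server to show it */

            for (;;) {                                  /* simple event loop */
                XNextEvent(dpy, &ev);
                if (ev.type == KeyPress)
                    break;
            }
            XCloseDisplay(dpy);
            return 0;
        }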

  29. Larry, you don’t have to worry about which window manager the user is using. He might even be using a window manager you didn’t know existed, and it will work. The window manager might even have been created after you released your software (which is probably the case with John. Netscape 2 is really old. I mean, did KDE and Gnome even exist back then?).

    That’s because all the interaction between the window manager and the application is done via a standard called ICCCM. If you want, for instance, a transient window of a certain size, you just set the right window manager "hints" (specified in the ICCCM) and the window manager will do its thing.

    Besides the window managers, you have the "desktop environments" (KDE and Gnome), both of which have things like a "tray area", desktop menus, etc. There is another (evolving) set of standards (the freedesktop.org standards) which specify these things. I have run Gnome programs which show as an icon in the "System Tray" (which is how KDE names that thing) even though they were written to run in a different desktop environment.

    So, what you have is a lot of standards (not unlike the web). You have the X protocol (a network protocol, used to draw everything, works everywhere, with every PC graphics card, even with remote terminals, with the exact same interfaces), you have the X library (the one I told you about — it’s a standard API, which being C translates to a standard ABI when combined with the platform psABI), you have the window manager interactions (ICCCM), the freedesktop.org standards (created mostly when Gnome was doing one thing and KDE another, to make things just work even if the program was written to run in a different desktop environment), the filesystem standards (FHS — specifies where you put things like the application data or libraries), OpenGL (plus GLX) for 3D, and the list goes on.

    You also have a lot of toolkits, which do the work of drawing the user interface elements for you (the window manager only draws the window borders and the desktop). They all do it using only the standard X protocol (plus a few extensions, which are all optional). Which one you choose doesn’t matter much (most of Gnome uses either GTK1 or GTK2, KDE uses Qt, Mozilla and OpenOffice use their own, games tend to use SDL, and there’s also wxWindows and some others I forgot right now). If you don’t want to end up depending on the exact toolkit version, you can put a copy of it together with your program (Mozilla did that with some libraries in the past).

    Besides the toolkits, you have the "helper" libraries, like fontconfig (font management), libpng, zlib, and others. Since there is a single source for them, they are the same (minus version differences, same as above).

    Here I have currently running, besides KDE applications, one Java application (compiled with gcj, using SWT with GTK2), one GTK application (using GTK1), and Mozilla (with its own toolkit). All of them would work the same if I were to use Gnome instead of KDE, or a different distribution.

    VMWare is a different beast; it needs a kernel module. As I said, the kernel internal interfaces change almost daily.

    Browsing freshmeat, I found some other interesting examples: Opera, Matlab, Mathematica, Maple, a few games (but I do know there’s more than what’s listed at freshmeat; they didn’t list Quake, Quake II and other FPS, for instance), and other interesting ones (the search returned 69 items, even though it’s probably incomplete).

  30. Anonymous says:

    Lazycoder weblog » Larry inadvertenly makes Joels point for him.

    That’s fascinating info Cesar, the isolation between app and window manager is far greater than I had realized. For some reason, I thought it made a difference if you were writing your app for KDE or for Gnome, but if you’re saying that an app written for KDE will work with Gnome (or rather that apps written to KDE will interoperate on the same desktop as apps written to Gnome), then that implies a far more stable platform than I realized. Because, of course, most GUI applications are written to take advantage of a desktop environment – since I believe that useful features like Cut&Paste are aspects of the desktop manager and not the GUI.

    I’ll toss out that most of the apps you mentioned above are open source, but you’re right, Opera, Matlab (requires kernel 2.4.x and glibc6 2.2.5), Maple (only supports Mandrake, Red Hat, or SuSE), Mathematica (only supports Red Hat/RHEL and SuSE), et al are examples of shrink-wrapped products for Linux, so their vendors have clearly found ways around the platform differences, especially Opera (which is the only one on your list that claims to support "most major Linux distributions"). Reading the web page on how to install Quake/Quake II for Linux did not encourage me. The id Software page for Quake III isn’t much more encouraging; it seems pretty clear to me that they’re not putting a lot of effort into their Linux port.

    I’ll also toss out that Maple costs $2000 a copy, Matlab costs $1900 a copy, and I couldn’t find the cost of Mathematica – they’re not listing it on their web site, which is a bad sign. So while these are shrink wrapped products, they ain’t cheap (yes, I picked the commercial version, not the student version, I’m not a student). This is probably because the market for Maple, Matlab and Mathematica isn’t that large, so they can’t rely on volume to get their profits.

  32. Quake isn’t that hard to install. Let’s begin with the most recent one (Q3A): the installation is simply marking a file as executable and running it (a lot like you would do in Windows, with the only difference being having to mark the file as executable). Most of the instructions on that page are of the "troubleshooting" variety (aka "what to do if things go wrong").

    Almost all the instructions on the page about Quake II tell how to enable hardware OpenGL acceleration, which should be enabled by default in most recent distributions. I did run the shareware version of Quake II some years ago, and didn’t need any complex command line options, and it didn’t crash (and it did run with hardware acceleration).

    Quake I didn’t support hardware acceleration, and did not support fullscreen under X11. If you wanted fullscreen, you had to use the svgalib version (svgalib is a library to directly control the video hardware. Sorta like you would do on DOS, that’s why you need to be root). Most of the instructions on that page are how to configure svgalib (the part before the "Quake I" heading). You also had to lowercase all file names (because the binary was looking for them in lowercase…), that’s why the small script is there (a simple replacement for manually renaming every single file). Of course, Quake I is really old.

    I believe the lack of proprietary (i.e. not free software) applications for Linux is because of the market. If you had to do an application for a single operating system, would you choose the one with 90% of the market share or the one with 5% (rounding the numbers really badly)? Would it be worth making it portable for only 5% extra market share?

    (Free software programmers tend to not be terribly concerned about market share, and usually end up porting to every operating system with at least 1% market share under the sun).

  33. Btw, I just ran into David Candy’s post about the Apricot. Actually Microsoft Word worked on the Apricot, I know that, because my wife (girlfriend at the time) was the tester for Word for Apricot 🙂

  34. Cesar, people seem to be writing applications for the Macintosh (not as many as Windows, but…), and from what I understand it has either a comparable or smaller market share than Linux.

    So it’s not just the lack of market share, there must be something else.

  35. Jeff,

    Does XFree86 support things like Cut&Paste and the other things that gui applications like to have, or is that done inside the graphical manager?

  36. Mo says:

    X11 manages cut & paste, although ‘rich’ cut and paste (multiple-format C&P – and ‘delayed copy’ as Windows has) isn’t there.

    KDE and GNOME muddy the waters – they’re Window managers *and* supporting applications *and* helper libraries. But, they’re all separate. A ‘GNOME’ application is just an application that uses the GTK toolkit for its widgets and the GNOME libraries for common dialogs, etc. The same applies to a KDE application. That doesn’t mean, however, that a GNOME application will not work if you don’t use GNOME as your desktop environment – so long as the libraries are there (and they can be statically-linked, if necessary), it’ll work.

    X11 was designed to be transparent across hosts and networks, and many of the design decisions which make X apps ‘just work’ irrespective of what you happen to be using are because of this – I can quite happily run Netscape 3 for Linux on a machine in the other room, but have my ‘display’ (the X server) as XDarwin on my iBook. The ‘desktop environment’ I’m using is something that’s completely alien to Netscape – the X server is running on an OS which didn’t exist when Netscape 3 was built, and neither did the window manager – but Netscape doesn’t care, nor need to care. The abstraction there is pretty powerful.

    The only thing you lose out on in mixing and matching applications from different desktop environments (or more specifically, different toolkits) is the consistency of the look and feel. If you use GTK or QT (GNOME and KDE’s toolkits, respectively), the user can customise this – and even if you statically-link your binaries against the toolkit, the app will pick up the preferences of the user and interoperate perfectly.

    It is worth bearing in mind, though, the testing scenario: it’s generally a bad idea to produce a shrinkwrap application ‘for Linux’. As you said, you can’t test every combination – so you don’t do that. Treat the different distributions as different operating systems (after all, that’s what they are!) – release a version for Debian, a version for RedHat Enterprise Server, a version for SuSE, and so on. If somebody wants to run your product on something else, you do a cost/benefit analysis, or tell them they’re on their own (depending on which is appropriate).

    "Linux" isn’t a platform, after all – just a kernel.

  37. Larry, I think it might be because while in the Linux market you find lots of people who either use it because it’s cheaper or will never use non-free software, that can’t happen on the MacOS front (since the OS itself is not free in both senses of the word, and it needs really expensive hardware to boot). So the potential market share for your application is smaller on Linux than it might seem at first glance.

    Mo: ‘rich’ copy and paste is definitely there, I used it by accident the other day (selected a paragraph in Mozilla and pasted it into OpenOffice, got annoyed because it magically pasted with the page’s formatting which I didn’t want, and ended up using Paste Special to paste it as Unformatted Text). I don’t know what you mean by delayed copy, but if it means that the source only sends the data to the server when the destination requests it, it has worked that way since the beginning.

    I think there’s no need to have a separate version for each distribution, unless you are doing something more low-level (for instance, something that needs to patch the kernel, or something that depends on the way the distribution configures the network). You can have a single version (possibly packaged a bit differently for each distribution, but only for convenience) and say for instance "tested on Debian 3.0, Conectiva 9.0 and Conectiva 10.0". The differences aren’t great enough to warrant a different version for each one.

  38. Paul A. Howes says:

    I’m going to have to agree with Larry on this subject. I have been programming computers since the Apple ][+ was introduced. My first computer was a Laser 128, an Apple //e clone. I wanted an Apple //gs at the time, but found the Mac line to be more compelling. After I got over the sticker shock of how much the Mac hardware and software cost (around the time the Quadra 700 and 900 first came out), I built a 486DX-33 with 4MB of memory and a 250MB hard drive to take to college with me. I think it had Windows 3.11 on it.

    Fast forward to today, and I have programmed for every version of Windows, multiple GNU/Linux distributions, Solaris, Irix, AIX, HP/UX, and the VAX architecture in languages ranging from shell scripting to Basic to Java to C to C++ and now C#.

    Any time I have had to deal with Unix in any of its infinite variations, the answer I have always given and received is "recompile the app from source". I can make binaries to distribute, but they’re always dependent on what else was available on the system I used to compile it.

    I saw the comments above about using old library versions to get older applications to run, but that was completely an end-user activity. The odds of, say, my 68-year-old mother being able to accomplish this approach zero pretty rapidly. I certainly wouldn’t want to tell my clients that they can’t install my app until they ensure that the appropriate (old) versions of the libraries are available.

    At this stage in my life, I prefer platform stability. I have a family to support and, as a Software Engineer, selling shrink-wrapped software is a tried-and-true way of making that happen. It’s nice to know that an application that I write today will run on tomorrow’s hardware and software simply because I target the Windows APIs. If I need an application to run on an older version of Windows as well, all I have to do is make sure that a symbol specifying the version is set, and the header files automatically disable the appropriate features.
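
    Concretely, that looks something like the sketch below (a minimal example, not from any particular project; 0x0400 selects the NT 4.0-era API surface, and the MessageBox is just filler):

    /* Defining the version macros before <windows.h> tells the SDK headers
       which API level to expose; anything newer is compiled out, so you
       can't accidentally depend on it. */
    #define WINVER       0x0400
    #define _WIN32_WINNT 0x0400
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
    {
        MessageBox(NULL, TEXT("Built against the NT 4.0 API surface"),
                   TEXT("Version targeting"), MB_OK);
        return 0;
    }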

    One app that we developed at work will run on any version of Windows from 95 to XP. We continue to add features to it, and it still passes all of the tests on each and every OS we’re responsible for supporting.

    Now that I have found C# and .NET, I am quite happy: I can write applications that stand alone or work through a Web site, or both! The same libraries work even when the presentation layer changes. All configuration is accomplished through XML files. Installations can be as simple as copying the EXE or DLL into a directory, just like back in the DOS 3.x days.

  39. Jeff says:

    Paul, in the case of your 68-year-old mother, I guess I don’t see how a shrink-wrapped version of a Linux application would be all that different from a Windows application. Case in point: a lot of Windows applications include required Windows libraries either on their CD or as part of their installation process. (I’ve started seeing quite a few of late that include the .NET runtime on their CDs.) How would including older versions of Linux libraries with an application be any different from how you distribute a traditional Windows installation?

    We still have a majority of computer users who have their kids (whether they’re 12 or in their 30s) install applications for them out of fear of messing up their computer. I work for a large ISP (broadband and dial-up) / software design / hosting company and we see these types of people day in and day out. I guess I see the dilemma as this: if you develop a new application and you want it to be backwards compatible, then you still have to identify what libraries are needed and possibly include some of them to make the application work. For me this has always been a given, whether I created a Windows application or a Linux application.

    From my personal experience, I keep reasonably up to date on the latest version of Visual Studio, and I have had very few problems importing old code and getting it to run in the newer environments. However, when I recompile it or start a new application, I typically have problems when I try to run it on older machines without finding the appropriate version of a needed DLL. I have had a few problems with my code compiling in Visual Studio 2003, but it was nothing that a *one time* investment of a half hour to a few hours wouldn’t solve. With our internal software, many times we’ve had to include other libraries (ranging from the Winsock 2 libraries back in the Win95 days, to the newer XML libraries to read those nice config files, to numerous other libraries).

    Larry, Paul, or anyone: with Microsoft’s new platform shifts looming, what do you guys think of the Win32 API being phased out, or will it be? In my ignorance, I’ll reference Joel Spolsky’s article on the subject of the two camps at Microsoft, where one wants to keep the APIs intact (compatible) whereas the other wants to push forward and break ties to the older APIs / versions of Windows. It sounds like support for older versions of Windows via the Win32 API is in jeopardy. I’ve already noticed that the newer .NET runtime supports Windows 98+ but not Windows 95, which I doubt impacts too many people compared to how many would be isolated if Windows 98 weren’t supported.

  40. The Win32 API will never go away, as long as Microsoft’s still making operating systems.

    Heck, the DOS 2.0 API hasn’t gone away and it’s not likely to go away any time soon.

    Win32 may not get the neatest and coolest features over time (stuff might be available only for WinFX in the future, for example), but the existing APIs are not going away.

  41. Saurabh Jain says:

    I might be a bit late to the party, but I believe every API’s documentation tells you which OSes it is supported on (at the bottom of the page).

    If the documentation were attributed correctly (with a TargetOS attribute, say; sadly it is not), one could have used filters to hide documentation that doesn’t apply. One example of how to use filters is "Relief from CE Documentation".

    There are a few ways to file a suggestion for the MSDN documentation (like clicking on the "Send Feedback to Microsoft" link at the bottom of each API’s page), but the best way is to file a bug/suggestion at the MSDN Feedback Center.

    Then pass the reference around for people to vote on it. Note that the MSDN Feedback Center only applies to VS 2005 beta 1. The reason I am suggesting you file a bug there is that the Help and MSDN teams are part of VS 2005, so your feedback will reach the correct people.



  42. Parallel Install says:

    One of the things most non-Unix people assume about Unix is that it works mostly like Windows does.

    Larry, you asked earlier if a binary for XFree would work on any version of XFree. First you need to understand how XFree86 works.

    XFree86 is a server. Clients (programs) connect to it and speak an established protocol (like IMAP, HTTP, and thousands of others) to instruct the display to draw things. As long as the protocol remains compatible, clients keep working, and it has stayed compatible for 15 years.

    Additionally, say your program uses GTK 1.4. That is the widget toolkit, which you use to draw windows and such. The loader does not load a gtk.dll from C:\WINDOWS; instead, it loads a versioned shared library from /usr/lib. See the version numbers in the file names? The binary declares what version it needs to link against, and the dynamic linker satisfies it.

    A bit about GTK. One uses GTK, and the functions in it, to create windows and buttons, hook up events, and the like. All GTK does is speak to X, using the standard protocol, to carry out those activities. Thus, you can have a GTK 1.4 program and a GTK 2.4 program (near-total rewrites of each other) running at the same time, on the same desktop, interacting with each other properly. One program links to one version, the other links to the other, and they both talk to X.
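
    For the curious, ‘toolkit things’ look roughly like the sketch below in GTK 2 (a minimal example built with gcc and pkg-config’s gtk+-2.0 flags; the widgets are just placeholders). Every call here ends up as X protocol traffic, which is why the program neither knows nor cares which window manager or desktop happens to be running:

    #include <gtk/gtk.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);          /* connects to the X server named in $DISPLAY */

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *button = gtk_button_new_with_label("Quit");

        gtk_container_add(GTK_CONTAINER(window), button);
        g_signal_connect(button, "clicked", G_CALLBACK(gtk_main_quit), NULL);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);
        gtk_main();                      /* event loop: X events in, callbacks out */
        return 0;
    }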

    Okay. Now say a vendor wants to distribute a piece of software that uses GTK 1.4. Well, the vendor wants it to work for everybody! And he can sort of guarantee GTK 1.4 exists on the systems out there, but really, he can’t. It would be better for him not to rely on it. So he just copies a GTK 1.4 build into his own package, and links against that. Now he’s built his own stable base. There is nothing wrong with this, and many vendors do it. Simple copy/paste of some files. 😉 The license specifically allows this redistribution.

    Where you need to worry about instability is really ONLY hardware-related things… kernel modules, XFree drivers. But is this different with Windows? I remember the great WDM change!

    IMO this kernel module thing is something that needs to be worked on… definitely. We need a standard base, and a commitment to maintain it into the future, with DRIVERS. But that’s really hard, and nobody really has a problem with it right now.

  43. Jerry Haltom says:

    I posted as Parallel Install. Next time I’ll read the input fields more carefully. 😉

  44. Jerry Haltom says:

    A few more points. Now, I’m not totally sure about Windows and how it deals with symbol names and the like:

    Being able to install two versions of one API at the same time is a massively good thing for our software.

    Consider the GTK example I alluded to. GTK 2.4 is a very clean API, with no backwards compatibility maintained and no kludges to maintain it. It does not adhere to the GTK 1.4 API at all. But GTK 1.4 programs STILL WORK EXACTLY AS THEY DID (we can’t introduce bugs into what we don’t change). This lets you very cleanly go on with development without worrying about backwards compatibility at all! A total refactor of the API, without a thought to backwards compatibility.

    But we can still guarantee GTK 1.4 apps will work, because nobody edits GTK 1.4.

  45. Jerry,

    That only works if GTK1.4 is completely self-contained. If the library depends on something (like the behavior of signal()), and the behavior of signal() changes, you’re toast.

    Does the GTK1.4 binary from SuSE work on RedHat without modification? How about Debian?

    Eventually you get to something that depends on the OS (or on file location, or something else).

    The idea that the library is a clean isolation layer works fine for some libraries (like cryptographic libraries), but without a significant amount of engineering on the part of the library author, it fails miserably in the general case when put into practice.

  46. Jerry Haltom says:

    We know we have some difficulties in this area. They can however be solved… in fact I think RedHat decided to solve them on their own.

    XFree86 works thusly: the user hits copy in Application A, and Application A notifies XFree that it owns the clipboard. The user pastes into Application B; Application B asks X for the clipboard, which retrieves it from A. Notice that the retrieval from A happened only when B pasted.

    This is annoying when the user closes A before he pastes in B, as A is no longer around to provide the data. There are a number of solutions… which a distro should be including by default!

    The reason it is done like this is for content negotiation. Application A doesn’t know what format to provide data in until Application B tells it what it accepts: jpeg, gif, plain text, RTF, HTML, etc. There are lots of possibilities. Do we generate every possibility on copy??? These are questions not yet answered perfectly. :0 And so, nobody has fixed it.
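
    For the curious, the owning side of that handshake looks roughly like the sketch below in raw Xlib (toolkits hide all of this; TARGETS negotiation and error handling are left out, and the window and data are just placeholders):

    /* Sketch of the selection owner's side of the X clipboard handshake:
       "copy" only claims ownership, the data moves when somebody pastes. */
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <string.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 1, 1, 0, 0, 0);
        const char *data = "copied text";

        /* "Copy": announce ownership of the CLIPBOARD selection.  No data moves yet. */
        Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
        XSetSelectionOwner(dpy, clipboard, win, CurrentTime);

        /* A paste elsewhere sends us a SelectionRequest; only now do we convert
           the data into the format the requestor asked for. */
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type != SelectionRequest)
                continue;

            XSelectionRequestEvent *req = &ev.xselectionrequest;
            XSelectionEvent reply;
            memset(&reply, 0, sizeof(reply));
            reply.type      = SelectionNotify;
            reply.requestor = req->requestor;
            reply.selection = req->selection;
            reply.target    = req->target;
            reply.property  = req->property;
            reply.time      = req->time;

            if (req->target == XA_STRING)        /* requestor asked for plain text */
                XChangeProperty(dpy, req->requestor, req->property, XA_STRING,
                                8, PropModeReplace,
                                (const unsigned char *)data, (int)strlen(data));
            else
                reply.property = None;           /* a format we can't provide */

            XSendEvent(dpy, req->requestor, False, 0, (XEvent *)&reply);
        }
    }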

    Windows does a sort of hybrid approach. It does exactly what X does up until the application closes, at which point it provides a primary data type to a system clipboard. This means content negotiation breaks when you close an application. Usually the user doesn’t notice this. I have before. And when I notice it, it really bugs me.

    *shrugs* There are a number of ideas floating to fix this. But yeah, it’s a problem.

  47. Jerry Haltom says:

    Larry, you’re absolutely right! GTK 1.4 may depend on a dozen different things… most of which, up to and excluding libc, can be packaged with the application. Libc would take some work. 😉

    You are absolutely right about the signal stuff too. I would contend, though, that relying on the behaviour of signals to the point of it being broken between releases is tantamount to hard-coding paths to C:\WINNT into installation routines… which I’ve seen done. That is why you pick APIs which can be guaranteed stable, GTK 1.4 being an example. It abstracts stuff such as signal management away from the application code. Choose to code directly to the OS version? Sure, you can do it, but it’s not going to be portable.

    Eh. That might be a bit of a weak point. *shrugs*

  48. Jerry Haltom says:

    Whoops, forgot to answer your question: "Does the GTK1.4 binary from SuSE work on RedHat without modification? How about Debian?" Yes. That’s what VMware does.

  49. Next part of the question, and this may not be relevant to Linux, but certainly is to *nix: How do you deal with things like AIX signals not working the same way as FreeBSD signals (IIRC, one supports suspend, the other doesn’t)?

    And again, keep in mind that this is about binary distributions, not about source…

  50. Jerry Haltom says:

    How often do you go about sending signals to processes in your code in the course of normal application development on Windows?

    If you’re dealing with signals, you are most likely dealing with fairly low-level process management stuff, or you are in fact writing your own toolkit… or maybe a complex piece of server software. For your mom’s spreadsheet application? Never.

    For the majority of development you are doing toolkit things: making windows, running algorithms. Writing to GTK guarantees that your code will run on any platform GTK does.

    Say though that you are programming a toolkit, like GTK: this is handled in GTK by C preprocessor conditionals, driven by macros that the autoconf/make process defines automatically:

    #ifdef BUILD_ARCH_AIX
    // do one thing (AIX-style signal handling)
    #else
    // do another (the default)
    #endif

    This doesn’t really answer your question; it just calls the question itself into question. :0 For the majority of development, just as on Windows, you are dealing with toolkit things, not OS things.

  51. Jerry Haltom says:

    I don’t want you to get me wrong. We don’t have answers for everything. One cannot just sit down at a Linux IDE and crank out a program, yet. But we are getting there.

    Consider things like file type associations. We have just now finally come to an agreement about how to handle these things cross-platform.

    The open source desktop is maturing more rapidly than you can imagine. In the scant two years since the introduction of Gnome 2.0, we have created what is, IMO, a suitable Windows replacement for the "majority of office workers without external dependencies". My personal office-admin experience tells me that is about 25% of the people working in an office: those that use web-based applications and office applications. And they work WONDERFULLY. It has a pretty GUI, a comprehensive set of basic applications, and the standards to govern all of that are forming. It’s not just a collection of ad-hoc programs anymore. We have a comprehensive HIG and other stuff.

    (I speak as a Gnome developer. This is where I am and what I work on, and what I see.)

  52. For Windows, there’s just one platform. It’s a constant. With few exceptions (many pointed out above), Windows behaves exactly the same across all of its versions (Win16 under Win32s, Win95, Win98, WinME, WinNT, Win2000, WinXP, WinCE).

    Since the platform works the same on all versions, your cost of developing software is orders of magnitude less.

  53. Keith Williams says:

    Oracle 9.2 runs on SuSE SLES8 – very well, I might add…

  54. Jerry Haltom says:

    You are right. It’s a well-known FACT that having to consider only one possibility is easier. Whether or not this creates a healthy market is another thing entirely.

    Consider the car market. It’s very similar to how the OS market could be.

    You have lots of different people manufacturing cars… but at the end of the day a new stereo works in most of them (except those new GM trucks that are putting the alarm system in the radio). Sure, it’s not perfect, but by and large consumers have shown many times over that they prefer having a choice to having none. And they will soon have it.

    Do you really think the car market would be better if there were only one car manufacturer? We don’t know HOW a mature PC market would function with multiple vendors. All we can do is look at what we have now and conjecture. I think it would be great though. ;0

    I didn’t want this to turn into a closed/open source argument… the likes of which have been done many times over. I just wanted to educate a number of the readers about a few of the things that make Linux good… and about why most of the reaction to it is overblown, as I believe it is. I don’t for a moment think that one day MS will not be an OS seller. That won’t happen, and it would be contrary to my mission statement: choice. I would just love to think that one day my mom can go to a PC store and have a choice about what she wants to lay her money down for; maybe she can shop around and find the better price/vendor… just like she can with a car.

    I sort of have this vision of the future that goes like this. My mom walks into a computer store and checks out the PCs on display. She goes over to a Dell. It’s running Dell OS. It’s based on Linux of course, but my mom doesn’t know that. It’s just a cool, Dell-specific desktop, with Dell-specific innovation and customization. And then she can go look at the Gateways… and lo and behold, every application looks like a cow. In fact some applications emit audible moos. (Okay, that’s taking it a bit far. 😉) I dunno. I have an optimistic outlook on it. *shrugs*

    Only time will tell, man. The "standard Linux desktop base" is moving at light speed though. People are embracing it. It’s only a matter of time before people start demanding applications for it, and a matter of time after that for vendors to start providing them. Yes, we have tons of problems to work around… but none of them is undoable. Want a standard base? Make a Java program. It’ll run on Linux and Windows. There’s nothing wrong with that.

  55. Of course I can’t do that stereo replacement in my Ford Taurus, or in my Chrysler Pacifica, each of which has a non-rectangular radio console… And the Ford Taurus is still one of the best-selling cars in America, despite having had a non-replaceable radio for years. The inability to replace the radio in a Ford Taurus apparently isn’t enough of a negative to affect Ford’s sales.

    And as far as Open Source vs. Closed Source: I’ve tried VERY hard to avoid religious arguments in this discussion.

    My mom’s a bad example (she’s a Mac person), so I’ll use Abby, our persona for the stereotypical AOL mom. Abby’s pretty simple in her software choices. What she cares about is the packages on the shelf. If she can drive to her local CompUSA and find a store filled with boxes that say that they run on her Dell OS, then she’ll buy the Dell OS computer.

    If she goes to the store and finds boxes that say that they support Debian, SuSE and RedHat, she won’t buy the Dell OS PC – there’s no software for that PC, the only software she can find is for Debian, SuSE, and RedHat.

    And unless/until the Linux platform becomes as rock solid stable (as in unchanging stable, not as in not-crashing stable) across ALL distributions, across ALL versions, then shrinkwrapped manufacturers won’t start shipping products for that platform.

    Right now, the discussion above makes it clear that with the exception of Opera (which claims to run on "most Linux distributions") all the shrinkwrapped software mentioned is software that already has accepted the cost of multi-platform support, and they treat each Linux distribution they support as a separate platform.

    Now it apparently IS possible to write full featured binaries that support all Linux distributions without modification, so I’m not sure what’s stopping the shrinkwrapped software market from expanding into Linux as a platform.

    I do know that the reason preventing shrink-wrapped software on Linux isn’t market share, since the Mac has comparable or smaller market share than Linux and there are a significant number of shrink-wrapped applications available for the Mac. There must be some other reason.

  56. Jerry Haltom says:

    "Now it apparently IS possible to write full featured binaries that support all Linux distributions without modification, so I’m not sure what’s stopping the shrinkwrapped software market from expanding into Linux as a platform."

    Experience and understanding. It took me about 3 months of exclusive Linux use to come to grips with the idea of "oh, I’ll just bundle everything up together, there’s nothing wrong with it. It’s not dirty."… and I consider myself a pretty smart programmer. It’s a massive paradigm shift for a developer to come to their own conclusion about how they can package their own software, vs. firing up InstallShield and having it "done for them". It’s not a massive investment though. It would take me, a single person, about 2 hours to put together a binary installer for almost any piece of Linux software. Of course I would need to fire up VMware and test this installation on a number of distros… maybe 8 hours of work for ALL distros. 10 hours of work total for software setup.

    You just need experience, which these companies do not have.

    I have experience packaging my own custom Windows applications and can tell you for a fact that it is hard. What version of MDAC do you need? Do we include that ADO dll in the installer, or include the MDAC redistributable? The app uses Crystal Reports? Does their license allow redistribution? Oh, it does against CRXDRT.dll, but not CRXDDRT.dll. Time to fix code. What about ADO 2.5, which changes the event model of the DataGrid so as to make 2.4 code not compile properly? Wait, I need my software to support both ADO 2.6 and 2.7 AT THE SAME TIME?

    (I deal mostly with small office database front end programs if you can tell)

    Windows Installer throws error 1608? FATAL_ERROR? WHAT DOES THAT MEAN? (Dealt with that 4 times this week.) 1603? 1010? So I have to install SQL Enterprise Manager before I install MDAC or it deletes all my OLEDB database drivers?

    It ain’t as easy as you make it out to be.

    On the other side of the fence, I just compile my program, grab some .so’s, stick them into a directory structure, and write a little bash script that copies a launcher to /usr/bin, asks my user where he wants to install it, copies it there, and makes sure the launcher exports LD_LIBRARY_PATH. Lots of intense technical work, but it’s not THAT BAD for somebody who develops software for a living. And if I wanted to I could just use one of the nifty GUI installers for Linux, like you find with Quake3, Oracle, etc. They duplicate InstallShield’s UI almost to a tee.

    Sure, lots of differences… no show stoppers.

  57. > Next part of the question, and this may not be relevant to Linux, but certainly is to *nix: How do you deal with things like AIX signals not working the same way as FreeBSD signals (IIRC, one supports suspend, the other doesn’t)?

    > And again, keep in mind that this is about binary distributions, not about source…

    A single binary won’t run directly on both AIX and FreeBSD (unless you use the linux emulation mode). So it ends up being about source code.

    When you have things like Linux binaries on FreeBSD, it is running on an "emulation" layer which translates the kernel’s API, and using the Linux version of all the libraries, so the behaviour is the same (if it isn’t, it’s a bug either in FreeBSD or in Linux, or you are depending on behaviour which could change). Curiously, FreeBSD can emulate Linux, but nobody has made a FreeBSD emulator for Linux.

    So, you have to port the source code. You can do that easily with things like autoconf, which you can use to ask "does this system support sigaction?", for instance (sigaction solves your signal problem when it’s available; you can ask it for either SysV or BSD behaviour. When you don’t have it, you can do things in one way that works on both: reinstall the handler from within itself, but don’t depend on it uninstalling automatically.)
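
    A rough sketch of what that looks like (HAVE_SIGACTION stands in for the macro an autoconf check such as AC_CHECK_FUNCS([sigaction]) would define; the handler itself is just an example):

    /* Prefer sigaction() where the configure script found it; otherwise fall
       back to signal() and reinstall the handler ourselves, so SysV-style
       "one-shot" handlers don't surprise us. */
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_sigint(int signo)
    {
    #ifndef HAVE_SIGACTION
        signal(signo, on_sigint);   /* SysV may have reset the handler: reinstall it */
    #endif
        write(STDOUT_FILENO, "caught SIGINT\n", 14);
    }

    int main(void)
    {
    #ifdef HAVE_SIGACTION
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;   /* BSD-style semantics: handler stays installed */
        sigaction(SIGINT, &sa, NULL);
    #else
        signal(SIGINT, on_sigint);
    #endif
        for (;;)
            pause();                /* wait for signals forever */
    }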

  58. Cristian Gutierrez says:


    "And unless/until the Linux platform becomes as rock solid stable (as in unchanging stable, not as in not-crashing stable) across ALL distributions, across ALL versions, then shrinkwrapped manufacturers won’t start shipping products for that platform."

    There are distributions for every need and scenario: floppy-disk-based firewall/router, multimedia editing, on-boot movie playing, LiveCD (and Live-mini-CD!) desktop, rescue and forensics, etc. I guess "ALL" would really only involve the major distributions and those aimed at moms & pops. That’s a couple dozen out of several *hundred* distros of every size, color and smell 😉

    "Right now, the discussion above makes it clear that with the exception of Opera (which claims to run on ‘most Linux distributions’) all the shrinkwrapped software mentioned is software that already has accepted the cost of multi-platform support, and they treat each Linux distribution they support as a separate platform."

    Even more, Opera delivers both major-distro packaged versions (for use with your distro’s package manager) and a generic binary that works on most of them. In both of those categories they even offer statically and dynamically linked versions (against Qt). I’d say the amount of in-house expertise and dedication they put into deploying Opera for Linux is pretty impressive; I’ve even seen them fulfilling requests for specific sub-distro-flavour packaging, purely for the convenience of advanced users.

    "I do know that the reason preventing shrink-wrapped software on Linux isn’t market share, since the Mac has comparable or smaller market share than Linux and there are a significant number of shrink-wrapped applications available for the Mac. There must be some other reason."

    There was already an opinion above in this thread, by Cesar: Mac people are generally more willing to spend cash on software, at least compared to the average Linux user. I understand the rationale to be that they already had to pay for a not-so-cheap machine, so other costs are perceived as marginal. But then again, it’s a guess.

    PS: Sorry if the quotation and/or justification gets messed up; it’s hard to follow threads in linear comments! ("Usenet, where art thou?" 😉

  59. mschaef says:

    "Without this platform stability, the testing and development costs go through the roof, and software costs escalate enormously.

    The existence of a stable platform has allowed the industry to grow and flourish. Without a stable platform, development and test costs would rise"

    I have three (increasingly shrill 😉) objections to your line of reasoning:

    The first is that a diverse community of software is good for the same reason a diverse community of biology is. From a security perspective, a network composed of multiple types of host is going to be more difficult to compromise than a network composed of one type of host. Given that the network effect scales exponentially with the number of susceptible hosts, even halving the number of potential targets for a virus/etc. can make a huge difference to the overall impact. The other argument for diversity (more dear to my heart) is that diversity encourages thinking outside the box. Windows is a good solution to the problem of running a computer, but it’s far from the only solution. To the extent that diverse systems bring other ideas into the computing ecosystem and open systems allow exploration of new ideas, a closed software monoculture has the potential to stifle a lot of the innovation so many of us value.

    Second, as Mike Dunn and others point out, there have been many changes to Windows over the years. It’s been at the point for a long time that ISVs have to test not only on each supported version of Windows, but on each service pack and browser update. (I know this because I’ve spent too much time with test matrices that include 10-20 different varieties of Windows installations.) To me, this situation doesn’t look any better than the situation facing vendors of Linux software. In fact, if I test on two versions of Linux that behave differently, I can develop an understanding of the difference by reading the sources and the developers’ dialog on mailing lists. Under Windows, I’m limited to a handful of useful blogs, the oftentimes incomplete MSDN documentation, and the folklore spread through various FAQs.

    The other objection I have to your argument is that Microsoft seems to be doing everything it can to make the situation worse. If a monoculture and a stable platform are such good things, then why has Microsoft introduced 2 new presentation-layer stacks, not to mention programming languages, in the last 3-4 years? Developers that want to target Longhorn as a first-class application will _have_ to develop a separate Avalon presentation layer (another code path) apart from the Win32/GDI/GDI+ presentation layer that will still be required to support older versions of Windows. This is not just more testing, this isn’t even as incremental a port as the jump from Win16->Win32, this is a complete rewrite, possibly in a different language altogether. Even worse, this all comes on the heels of Microsoft’s last next-generation development environment, Windows Forms, etc. I mean come on, I understand the need to move the platform forward, but surely there was a better way than this?

    PS: "Of course I can’t do that stereo replacement in my Ford Taurus, or in my Chrysler Pacifica, each of which has a non rectangular radio console… And the Ford Taurus is still one of the best selling cars in America, despite having had a non replacable radio for years."

    I don’t know about the Pacifica, but I walk by a late model Ford Taurus with an aftermarket radio every day I’m home. I think Crutchfield sells kits…

  60. mschaef: A diverse community of software is good. A diverse community of platforms is not necessarily as good. More browsers==good. More operating systems==not necessarily good.

    I’ve got code I wrote for Win16 that still runs in Windows XP. The platform is upwards compatible, it’s not downwards compatible. If you want to write code that works on all versions from win98 on, then develop and test on win98, it’ll continue to work on XP.

    The more platforms a vendor needs to test on, the higher their costs. Increased vendor costs==increased customer costs. The reality is that the world doesn’t need word processors that cost tens of thousands of dollars. With the exception of Opera, every one of the cross-platform apps mentioned above costs thousands of dollars.

  61. mschaef says:

    Thanks for responding, Mr. Osterman. In a different order, I have a couple comments (as you might expect).

    "More browsers==good. More operating systems==not necessarily good."

    In this particular case, I don’t see the distinction. If I’m developing a web application, the browser is my presentation platform, just as if I’m developing a rich client application, the operating system is my presentation platform. So, if a unified and consistent platform is important for developers of rich client apps, why is it any less important for web applications?

    "mschaef: A diverse community of software is good. A diverse community of platforms is not necessarily as good."

    I’d argue that platform diversity is _essential_ to having software diversity. I can’t speak for others, but for me different platforms bring with them different baggage, different models for software design, and different cultural values. Linux’s openness compared to Windows is just one such example of the value of having access to a different platform. If monoculture costs us that level of openness, or keeps OSX folks from working in their favored style, then it’s an ultimate loss.

    Also, it frankly bothers me to hear about the virtues of platform monoculture from an employee of a company in the midst of developing an entirely new platform for application development (WinFX). The undertone is that Microsoft somehow is better able to make platform decisions than the rest of us… (Which isn’t necessarily true, BTW. 🙂)

    "The platform is upwards compatible, it’s not downwards compatible. If you want to write code that works on all versions from win98 on, then develop and test on win98, it’ll continue to work on XP. "

    The question is: could you comfortably ship a product that was entirely developed and tested under Windows 98 and still claimed to be supported under XP?

    "The reality is that the world doesn’t need word processors that costs tens of thousands of dollars."

    That’s more than a little hyperbolic. Microsoft Word has been cross-platform for close to two decades, ran on Unix for several years, and costs under $500 when bundled with Excel and friends. I even think it’s profitable…

  62. mschaef: Actually Microsoft Word (the original one for MS-DOS) was ORIGINALLY cross platform, my wife worked on it actually :). But Word for Windows isn’t. The Mac version is a totally separate product. Separate teams, separate test organizations, same name.

    Could you ship a product developed and tested under Windows 98 and claim to be supported under XP? You’d need to TEST on XP, of course. You might have to make changes if you wanted to meet the XP logo requirements (setup and run as a non-admin, for example). But the CODE developed on Win98 should work on XP; you want to sanity-check to ensure that this is the case, but… that reduces your development costs by several orders of magnitude.

    And WinFX is a new platform, that’s true. But the old platform is still there, and it’s not going away any time soon. There’s nothing stopping you from writing to the existing Windows platform. Visicalc’s going to continue to run on Windows for a LONG time (likely as long as Intel keeps on making IA32 processors). Some of the features in WinFX (like animated buttons) won’t necessarily be a part of the old platform; if you want to take advantage of such a feature, you need to move to the new platform. But you don’t HAVE to move to the new platform.

    Just like Carbon and Cocoa on the Mac – you don’t HAVE to switch platforms, you can continue to use Carbon (or Cocoa, I keep on getting them confused) and use the legacy APIs on the new OS.

  63. mschaef says:

    Wow… that was fast.

    "mschaef: Actually Microsoft Word (the original one for MS-DOS) was ORIGINALLY cross platform, my wife worked on it actually :). But Word for Windows isn’t. The Mac version is a totally separate product. Separate teams, separate test organizations, same name. "

    Now… you see… if Word had been open source, I would have known that. 🙂

    In seriousness, are you saying that Microsoft completely reimplemented Word for the Macintosh and the two products don’t share any code at all?


    My semi-educated (but very outside-MS) guess has always been along the lines that the Mac Word code base is still ultimately a derivative of the MacWord/WinWord 6 code base. Of course, the interface code has to have changed a great deal, and I assume that the old Win32-on-MacOS translation layer is pretty frequently bypassed, but I was guessing that the MacBU team still does what they can to keep the page layout code as closely synched with the WinWord code tree as possible.

    Part of my rationale for this was that I’ve been quite impressed by the fidelity of documents moving back and forth between Mac and Win Word. It seems too good for there to be two totally separate layout engines, etc., particularly given the relatively low historical staffing level of MacBU (~160, IIRC). If a team that size reimplemented Word in the last 6-7 years while simultaneously releasing multiple versions of Office/etc., then I’m utterly amazed.


    "Could you ship a product developed and tested under Windows 98 and claim to be supported under XP? You’d need to TEST on XP, of course. "

    And that’s what I’m getting at. I just don’t see the difference between testing on all the different versions of Windows (including combinations of service packs and IE releases) and testing on all the different versions of Linux. If there’s a bright side to this, I suppose the number of viable combinations of Windows releases is going down as the platform ages.

    "And WinFX is a new platform, that’s true. But the old platform is still there, and it’s not going away any time soon."

    My biggest concern about this is that applications that restrict themselves to Win32-style APIs are obviously going to be second-class citizens in the new world. At least with Win32, I can call GetProcAddress and gracefully degrade if an API is unsupported. To support Avalon, it’s an entirely new presentation layer, or at the very least some Avalon-specific controls stuck in my Win32 window.
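
    The pattern I have in mind is roughly the sketch below (GetNativeSystemInfo is just a stand-in for any API that an older OS lacks):

    #include <windows.h>
    #include <stdio.h>

    /* Look the newer API up at run time instead of importing it, and
       degrade gracefully if the running OS doesn't have it. */
    typedef void (WINAPI *GetNativeSystemInfo_t)(LPSYSTEM_INFO);

    int main(void)
    {
        SYSTEM_INFO si;
        GetNativeSystemInfo_t pGetNativeSystemInfo =
            (GetNativeSystemInfo_t)GetProcAddress(GetModuleHandle(TEXT("kernel32.dll")),
                                                  "GetNativeSystemInfo");
        if (pGetNativeSystemInfo)
            pGetNativeSystemInfo(&si);   /* newer OS: the richer API is present */
        else
            GetSystemInfo(&si);          /* older OS: fall back to what's always there */

        printf("Processor architecture: %u\n", (unsigned)si.wProcessorArchitecture);
        return 0;
    }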

    "There’s nothing stopping you from writing to the existing Windows platform. Visicalc’s going to continue to run on Windows for a LONG time "

    I’d have a hard time selling software if I restricted myself to Visicalc’s platform, or even Win16.

    "(likely to be as long as Intel keeps on making IA32 processors). "

    My guess is longer…emulation works pretty well these days. 🙂

    "Some of the features in WinFX (like animated buttons) won’t necessarily be a part of the old platform, if you want to take advantage of that feature, you need to move to the new platform. "

    "Just like Carbon and Cocoa on the Mac – you don’t HAVE to switch platforms, you can continue to use Carbon (or Cocoa, I keep on getting them confused) and use the legacy APIs on the new OS. "

    I don’t believe that Carbon’s deprecated, so I think Apple is pretty actively committed to keeping Carbon apps on the same footing as Cocoa apps. Based on how Microsoft really failed to move Win16 forward after introducing Windows 95, I have less faith that y’all will let me keep relatively up to date without a rewrite to Avalon. (Which means two separate presentation layers until WinFX gains enough of the market to ignore Win32.)

    The saddest thing about WinFX to me is that I’ve always viewed one of Microsoft’s biggest assets for developers as being that they pay attention to their past when moving the platform forward. Win32 was closely modeled after Win16 to ease the transition to 32-bit. MFC was kept pretty close to Win16 to make it easier for Windows developers to learn. Even Windows Forms has a lot of hooks down into Win32. All of these platforms could have been made better by abandoning the baggage, but my impression has always been that Microsoft was trying to honor folks who made investments in learning and developing for their core APIs. It doesn’t feel that way this time around.

  64. Check out the Microsoft Mac bloggers for more info: Rick Schaut’s blog is a good start. At one point in time the code was common, but today they are two separate teams.

    About all the different Windows versions being different platforms: the difference between one Windows version and the next is minimal, because Microsoft spends millions and millions of dollars to ensure that the differences are minimal. The same cannot be said about open source platforms – when you have the source and can recompile the product, you need only maintain source-level compatibility from version to version; you do NOT have the same requirements for binary compatibility. You also have the luxury of saying "Hey, your app is broken, we won’t change the OS to fix the broken app, it’s the app’s fault". When you are writing a platform for binary applications, you need to change the platform to get the apps to work. Check out Raymond’s blog for some great examples of this.

    Carbon’s not deprecated – that’s my point – it’s the Mac’s legacy API. Cocoa is the new API that gets the new features.

    I agree with you about WinFX, btw. I too wish it had been closer to the previous versions. The .NET framework (absent Avalon) is fairly close, actually, but Avalon is a total break.

  65. mschaef says:

    Heh… those are two of my favorite blogs, but it’s been a long time since I read Mr. Schaut’s. After reading The Old New Thing a little, I do fully understand the desire to start fresh.

    "Carbon’s not deprecated – that’s my point – it’s the Mac’s legacy API. "

    I think it’s more than just the legacy API. I don’t know if this stayed true in OS X 10.3, but for a long time, the Finder itself was written in Carbon. I somehow don’t think that much of Longhorn’s Explorer will be done in Win32.

    "The .Net framework (absent avalon) is fairly close actually, but Avalon is a total break."

    Yeah, .NET is nice, I just wish I had more time to play with it. FWIW, I am looking forward to working with Avalon. For projects that adopt it, I’m pretty confident it’ll deliver on its promise.

  66. Cristian Gutierrez says:

    It seems that privilege separation in the NT kernel is somewhat disruptive of the previous model; check Microsoft Knowledge Base article 307091 and you’ll find a list of software packages (some even made by MS) that are "not designed for Windows XP", and therefore perform badly or not at all when running as a "Limited User".

    Last time I checked, Unix apps made to be run by non-root accounts ages ago are still running under the same conditions (provided library compatibility is solved, in one of the ways already mentioned here). That behaviour hasn’t changed much, and I’ll venture to guess that not many "backwards-compatibility modes" are being stuck into the Linux kernel, for example.

    One of the advantages of an ‘evolutionary’ and mature platform, instead of a (highly touted) revolution every so many years 🙂

  67. Cristian, you’re right. Win16’s (and Win95’s) lack of security STILL plagues us to this day.

    I’m just hoping that going forward, we’ll be able to do something about it. The good news is that our logo requirements do encourage people not to run as an admin, and I’ve been quite surprised at how much DOES work as a non-admin.

  68. Oh, and Cristian, I’d change the "evolutionary and mature" to "Designed for multi-user operation from day one". If Unix had been intended for single-user workstation machines, it would have just as many issues as Windows. But because it was intended for multiple users, privilege separation was baked in from the beginning.