Turning the blog around – End of Life issues.


I’d like to turn the blog around again and ask you all a question about end-of-life issues.

And no, it’s got nothing to do with Terri Schiavo.

Huge amounts of text have been written about Microsoft’s commitment to platform stability.

But platform stability comes with an engineering cost.  Maintaining old code gets expensive: typically it wasn’t written to modern coding standards, and the longer it exists, the more heavily patched it becomes.

For some code that’s sufficiently old, the amount of engineering needed to move the code to a new platform can become prohibitively expensive (think about what would be involved in porting code originally written for MS-DOS to a 128-bit platform).

So the older an API gets, the greater the temptation to find a way of ending its viable lifetime.

On the other hand, you absolutely can’t break applications.  And not just the applications that are commercially available – if a customer’s line-of-business application fails because you decided to remove an API, you’re going to have to put the API back.

So here’s my question: Under what circumstances is it ok to remove an API from the operating system?  Must you carry every API forward forever?

This isn’t just a Microsoft question.  It’s a platform engineering problem – if you’re committed to a stable platform (in other words, a new version of your platform isn’t going to break existing applications), then you’re going to have to face these issues.

I have some opinions on this (no, really?) but I want to hear from you folks before I spout off on them.

Comments (51)

  1. Anonymous says:

    Wouldn’t it be possible to rewrite the entire OS from scratch, and run all legacy apps through a VM-type layer?

  2. Anonymous says:

    Why not ship some sort of virtual machine with Windows, and include previous operating systems for those who need them?

    I’d be very happy if you could strip out all the dead wood that’s made it into XP purely on the strength that someone, somewhere, has an app that needs it.

    Sure, it’s nice that I can dust off my old copy of WordPerfect for Windows 6.0a and install it on XP, but I suspect that most users (myself included) would rather their install of XP was smaller, faster, and wasn’t held together in places with spit and baling wire.

    When it comes down to it, I think I’d take a fairly hard line, and if an API is superseded then I’ll support it in one more OS revision, then ditch it. (I.e. Win 3.1 APIs would have been supported in Win95, but not Win2000.) If you have an old app you need to run, why should everyone else have to suffer as a result? If it only runs in Win 3.x, then run it in Win 3.x — it’s your choice. If I went round complaining that the fan belt from my Ford Model T doesn’t work in my brand new Ford Explorer, people would laugh at me…

  3. Anonymous says:

    As an addendum to that, one thing that would help enormously with getting rid of old APIs is making use of well-documented and open file formats in applications. That way, if my ancient app for doing foo stops working when I upgrade to Win2027 (or whatever Longhorn ends up being called), then I can buy a new foo app and import the data. Closed proprietary data formats mean that I _must_ be able to run the app that created them or risk losing all my data; if I knew that any similar app would be able to read the format, I’d not be so worried.

  4. Anonymous says:

    Like the previous posters said. Scrap the lot of them and provide a VM. It’s what Apple did with OS X, and the improvements because of this were huge.

  5. cmonachan says:

    Sounds like they’re already thinking in that direction:

    http://www.enterprise-windows-it.com/story.xhtml?story_id=02200000GA1E

    But it might not be something that Larry is allowed to talk about yet…

  6. Anonymous says:

    I think that as engineers, we’re worried about it because we like elegant solutions. Old APIs are not elegant any more. But Microsoft’s customers aren’t so worried about elegance; they’re worried about getting stuff done. If you wrote an in-house Mac app in the ’90s, how irritated would you be that it won’t run on the new Mac OS X? Would you upgrade to the new Mac OS X or would you continue to use System 9?

    Web apps mitigate this cost, but there’s still a bunch of old apps that need compatibility. Just look at the point-of-sale terminal in many stores: DOS. Terrifying? Yes. Functional? Yes.

    Microsoft has just as much obligation to in-house app writers as it does to Tom, Dick, and Jane who want to check their e-mail. Realistically, they have more obligation to corporate consumers because they have more money at once. The normal consumers would need a union to represent them.

    Microsoft will support old APIs for a long, long time because that’s what supports their upgrade cycle. Upgrades make a lot of money for Microsoft (please correct me if I’m wrong on this, Larry).

  7. Anonymous says:

    Larry,

    if nothing else is done about the problem, an API absolutely must be carried on forever.

    In fact, I think the right question to ask is ‘what can be done in order to make it possible to remove an API?’.

    WM_MY0.02$

    thomas woelfer

  8. Anonymous says:

    I’d say to remove the API when the version of the program (OS, library, whatever) that introduced it reaches end-of-life, as long as there is a newer version of the API, of course. That way you continue to support the ability to do something, just not that particular API. I think people get more bent out of shape when you completely remove the ability to do something rather than refactor and improve.

    -Jeff

  9. Anonymous says:

    I think it’s important to note, when answering this question with comparisons to Apple’s OS X, that the OS X architecture is completely separate from the Classic Mac OS architecture. If you are writing a completely new platform so that you can toss the last 15 years of cruft, then sure, it’s okay to write a new API set and provide a reasonably capable compatibility layer that old apps can use.

    On the other hand, if you’re just making incremental upgrades to the same core OS, then you need to be more careful about killing off APIs. If an old API is superseded by a more capable API, has been deprecated by documentation and compiler warnings since the new API was added, and the new API has been around, stable, for at least two, maybe three years, then the old one may be ripe for elimination.

  10. Anonymous says:

    The main problem I see with the VM method and the compatibility layer method is that in both cases you are still shipping the old code with the new OS – and thus it still needs to be kept up to security standards and the like, since people will blame MS (or the particular vendor) for any security holes that show up in these situations.

    The only thing I can think of is to allow older OS installs to run inside of VirtualPC – but do not ship the older OSes with the new one – then apply modern security to the vectors into and out of VirtualPC. This of course raises the question of activation and piracy inside the VM, because you run into situations where people will have issues activating Windows Server 2003 for the 50th time inside the VM, since they have since upgraded to Windows 2253 (Really Good) edition. Or people running server OSes under a workstation OS.

    So it’s almost the VM idea, except without explicitly supplying the older code. Just let them install the old copy of Windows they obviously have, since they were using it.

    Looking back that is a pretty random set of comments. Heh.

  11. Jeff Parker says:

    I think personally I would do it much like the .NET platform does, though doing this for the old APIs would probably require an upgrade to them first. The .NET platform currently marks classes and methods Obsolete, which means they still work now, but in the next version they are not going to be there. I do this in my own apps and have found it a rather pleasant way to make changes to APIs, web services, etc. Oh, and in marking them obsolete you also tell callers what they should change to.

    So for example NT 3.51 APIs could have been marked obsolete in NT 4.0, then removed in Windows 2000. By that schedule, in Longhorn you could be removing APIs that date from Windows 2000.

    Now, would this affect upgrading? Yes it will. We still have 2 NT 4.0 servers running, because they need to be there – the apps they run will not upgrade to 2000, let alone 2003. However, one of my objectives this year is to rebuild all the code on these in .NET and put the apps on a 2003 server. Eventually everything in software must be rebuilt, upgraded or abandoned. That’s just part of the software lifecycle. I will say Microsoft has done an absolutely outstanding job of keeping things backward compatible all these years, but like you said, eventually it becomes so costly to maintain the old code that you have to question whether it’s really worth it.

  12. Larry Osterman says:

    cmonachan, that’s the first I’ve heard about that – it’s news to me.

    Shipping a VM solution is interesting (again, this is the first I’ve heard about it) but it’s not really the question I was asking.

    I wasn’t asking HOW one could end-of-life an API set, I was asking WHEN. Under what circumstances could this be done.

    What are the criteria for determining when this would be feasible?

  13. Anonymous says:

    I don’t know if it will ever be cut and dried, Larry. Pulling an API is one thing, but you need to ensure something is offered to replace it, and that it is easy to adopt. And then cut the ties quickly, or fear the wrath of indecision.

    NetDDE is a perfect example. There was a bad privilege-escalation vulnerability in Windows 2000 that was carried forward from old DDE code. DDE is pretty much dead… yet many apps still use it. (Unfortunately.) DDE evolved into OLE, which evolved into COM, which evolved into DCOM, which is now evolving into Indigo. (I think that’s how it goes. Correct me if I am wrong.) Pulling DDE sooner would have removed that vulnerability on W2K, or at the very least minimized its impact. However, there was never a clean upgrade path ensuring there was a mechanism to upgrade without breaking things. And what’s worse, just as people were ready to move on, it changed again. So developers didn’t touch it.

    I think EOLing an API should come with a painless mechanism or bridge to get buy-in from developers. And that has to come on a timeline that is well defined. It can’t keep changing. I wish I could give you a good example, but it’s difficult to think about how deep this problem goes sometimes.

    However, Longhorn is a GREAT time to break compatibility… or at the very least produce a clear and clean path for adoption of the new APIs. The platform is so different that HOPEFULLY people would be willing to adopt sooner. Bringing things like Indigo and Avalon back to XP was a great start. It allows earlier adoption and a migration path that is easier to swallow in the software management lifecycle.

    Of course, that’s a mile-high view of things. I am sure each developer will have their own issues, depending on what you pull, and why. If there was an excellent API exposing some key feature my application used and you pulled it, I would probably be irrational and swear up and down about your anti-competitive behavior, unless you offered an alternative that let me achieve the same goal with little impact to my codebase. 😉

    But that’s just me.

  14. Anonymous says:

    The problems with getting rid of old APIs are obvious. A VM oriented solution may be workable, but another possibility is splitting the old APIs up into a selective install package, like the Platform SDK or even as part of the OS install. The people that need the old APIs (developers, users) can download this package of APIs and install the ones they need. I realize this is a huge engineering undertaking, especially with testing, but it’d be a way to clean out a lot of APIs from the core OS install. Hopefully, if the APIs really aren’t used by many people, the trouble of installing an old API is not a big deal.

  15. Mike Dimmick says:

    The VM solution works for applications, but not for plug-ins. Witness the problems that are already being seen on Windows x64 – people complaining that Explorer shell extensions don’t work, and that SQL Server 2000 performance counters only work in the 32-bit Performance snap-in, meaning you need the 32-bit MMC, and MS shipping both 32- and 64-bit versions of IE.

  16. Anonymous says:

    To the last Jeff, that’s harder than you think.

    Can you remove a method from user32.dll and then install it when it’s needed? Chances are no on that one; you’d probably need a new user32.dll to take its place. Talk about DLL Hell.

    I do like the idea. It’s one of those "impossible today, tomorrow?" things where it’d be good to develop a system that allows such a thing. Of course you’d have to rework PE and how DLLs and compiled code work (or turn user32.dll into a JIT’ed assembly, which I think would be awesome). So it could be done, just not today and not on Windows. They’d have to rework basically everything, but if everything moved to a .NET world, theoretically it shouldn’t be that hard to fathom.

    Now for Larry’s question. I always think security trumps backwards compatibility, but obviously that’s a personal preference. No API is perfect, so even if you could clean up all of the bugs there may be a case where an API simply does not make sense from a security or logical standpoint. It’s hard to define specifics because I can’t think of any outside of the usual (insert really old Windows version here) APIs being obsolete on a 32-bit platform (or 64-bit). Coding practices should be a good weighing factor, though not 100% decisive.

    For instance, MS has pretty much moved to a test-driven development model, or at least quite a bit of testing goes on with their code. If a big security threat hangs over some really old code, it may take x hours to convert that code into something the newer platform can both compile and test accurately. In yesteryear a patch may have taken only a couple of hours, but because the codebase is so huge you have to rework everything so that it can be tested thoroughly in accordance with today’s rules.

    Where do you strike the balance that says the old code would take too much time to bring up to today’s standards and practices? There’s nothing concrete, so it’s usually decided case by case. I think a good indicator is: if it takes you 3 weeks to convert old code versus the 3 days it would have taken under the old system, you could spend that 3 weeks rewriting it from scratch to be more robust, rather than prodding at old code to make it do new tricks. Knowing the WHEN could be put into a formula, but I don’t know how practical it would be. You’d almost have to try both methods – a total rewrite versus an incremental change – to see which one is actually more time consuming, because we can all speculate how long it takes us to write something, but that changes the second we actually sit down and do it.

  17. Anonymous says:

    About the How:

    You have the AppCompat layer in WinXP – use it. Don’t run a VM until the mainstream has processors with real virtualization support, which improves the performance and memory use of VMs like VirtualPC.

    About the When:

    I have been thinking about this for quite some time, and today we had our operating systems course (well, actually a Linux course currently talking about POSIX threads…) where the lecturer told us that in kernel 2.7 they’re removing the /proc filesystem and replacing it with a brand-new, all-different /sys filesystem. Similar things must have been going on regarding threads and other parts between the stable 2.4 and stable 2.6 kernel versions. Now I am not a Linux guru and don’t care what they’re doing (yet?), but I think this is likely too many changes at once. Imagine: Microsoft replaces the entire threading support with something brand new and gives all developers a head start of about 1-2 years to port their existing code over… Unthinkable.

    As you are working in one of the multimedia groups, and I have been using your APIs a couple of times, I always wondered: Why is winmm.dll still around? I mean, this library dates from when? Win 3.1? Win 3.11? It has been replaced by ActiveMovie aka DirectShow – yet DirectShow was never made feature complete (e.g. a source filter for CDDA is still missing in XP – wtf? winmm has been able to play that for how long?). In my thinking, understanding and experience, DirectShow is far superior, yet it lacks a couple of simple things. Why not complete them and replace winmm entirely? Then simulate winmm via AppCompat and declare the functions obsolete in the SDK… Of course this is terribly oversimplifying – but to get the basic idea: Replace an API once a new API has been *shipped*, *stabilized* and is *feature complete* with respect to the capabilities of the old API. Then ship one OS version with both APIs in parallel, where winmm calls into DirectShow, and finally drop winmm into AppCompat support mode…

    On the other hand… How about simple evolution instead of steady revolutions, as I’ve posted here: http://www.michaelruck.de/technical/2005/01/metadata-war.html

  18. Anonymous says:

    A: When no one uses the API anymore!?

    Q: How do you know when the API is not used anymore?

    A: During the course of a year (arbitrary), the OS builds a map of unused APIs. At the end of the year it reports them to M$. M$ becomes more informed as to the actual need for each API in the next core release. Rarely used APIs could be released as ‘on-demand extensions’. (A sketch of the counting idea follows below.)

    Q: Can this really be done efficiently???
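    For what it’s worth, a minimal C++ sketch of what that counting might look like at the export level – the names are hypothetical, and nothing like this telemetry exists in Windows today:

        #include <windows.h>

        // Hypothetical sketch: the legacy export becomes a thin thunk that
        // tallies calls before forwarding to the (renamed) real implementation.
        // The count could be flushed at DLL unload and, with user consent,
        // reported back, so rarely-used APIs can be identified over time.

        static volatile LONG g_doLegacyThingCalls = 0;

        static BOOL RealDoLegacyThing(DWORD flags)
        {
            UNREFERENCED_PARAMETER(flags);
            return TRUE;  // the original implementation lives here (elided)
        }

        extern "C" BOOL WINAPI DoLegacyThing(DWORD flags)
        {
            InterlockedIncrement(&g_doLegacyThingCalls);  // cheap, thread-safe tally
            return RealDoLegacyThing(flags);
        }

    Whether that bookkeeping is cheap enough to leave enabled for every export is exactly the efficiency question above.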

  19. Anonymous says:

    If they won’t support it,

    then open source it!

    (whoo, it rhymes, sort of…)

    s/they/you or s/they/Microsoft as appropriate…

    The advantage is that you push the cost of maintaining the APIs onto the people who want the APIs. You are now free to remove anything you want, and people can add them back if they *really* want. The disadvantage is that your proprietary secrets aren’t anymore.

  20. Anonymous says:

    A quick thought: use something like the SxS (side-by-side) support added in XP to have multiple versions of the system DLLs.

    An executable’s subsystem version (in the PE header) would direct the PE loader (and LoadLibrary, etc.) to the correct set. Old code would use something resembling the current DLL (e.g. Kernel32) but new code targeting version x.y of the OS would refer to a revamped DLL that doesn’t export any of the deprecated APIs — no export, no usage. Each version of a particular DLL would munge parameters, forward the call, etc. in a version-specific fashion.

    Compilers and linkers would probably need some updated support to detect usage of deprecated function X on version y.z of the OS [in VC++ "__declspec(deprecated)" could become something like "__declspec(deprecated, y.z)"; a sketch of the form that exists today follows at the end of this comment].

    Also, deprecate access to the Windows folder for new apps; access only via some new API.

    Just think: clean versions of Kernel, GDI and User without 10+ years of compatibility hacks; Raymond Chen’s excellent stuff can be put in a cupboard, never to see daylight again :-).
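    As a point of reference, newer VC++ compilers already support a message string on __declspec(deprecated); here is a minimal self-contained sketch. The API names are made up, and the versioned __declspec(deprecated, y.z) form above remains hypothetical:

        // deprecate.cpp -- compile with "cl /W3 deprecate.cpp" to see C4996.
        #include <cstdio>

        // Hypothetical legacy export, annotated so every caller gets warning
        // C4996 along with a pointer at the replacement.
        __declspec(deprecated("CreateThing is superseded; use CreateThingEx"))
        void CreateThing(int flags) { std::printf("old path: %d\n", flags); }

        // The blessed replacement.
        void CreateThingEx(int flags, int extendedFlags)
        {
            std::printf("new path: %d/%d\n", flags, extendedFlags);
        }

        int main()
        {
            CreateThing(1);       // still compiles and runs, but warns
            CreateThingEx(1, 0);  // clean
            return 0;
        }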

  21. Anonymous says:

    As Michael Ruck wrote, use the AppCompat layer to start flagging to users that an API is approaching obsolescence.

    It will encourage end users to upgrade to newer, more secure apps that are self-repairing as well (via MSIs).

  22. Anonymous says:

    I think APIs should be kept around for quite a long time. That doesn’t mean legacy stuff should remain at the core of the OS forever. At a certain point it becomes very acceptable to make legacy APIs available only as optional packages or even downloads.

    I’m all for moving the core forward, keeping it lean and fast, and (most importantly) not letting legacy stuff get in the way of doing new things in a better way. Offloading legacy libraries or even entire architectures to an emulator environment isn’t out of the question either. It’s a cheap means of still providing support while keeping the core clean, thereby drastically reducing dependencies and the strange little effects that creep in because of overall system complexity. (Like Apple did with OS 9-type app support under OS X.)

    Right now, if you guys think Windows is getting too crowded, it might be a good idea to retire the DOS and Win3.x-type APIs off to a nice optional emulator. And while I think the more drastic idea of retiring a large set of Win32 altogether, to motivate vendors towards .NET and related technologies like Avalon and Indigo, would have a very rejuvenating effect on the whole application landscape, that would probably not be a good business decision 😉

  23. Anonymous says:

    Two points. The first point is an example – the PIF executable file format. How many customers have been infected through it? How useful is it?

    Second point: Apple solved this by adding an OS 9 emulator to OS X, so instead of keeping up with the old OS 9 APIs they just dumped them and started over with a nice fresh OS X set – but without breaking OS 9 applications.

  24. nksingh says:

    I don’t think the OS 9 emulator is _that_ great of a model. I tried running some specialized biological data analysis software on it and there were many problems.

    Why exactly do APIs need to be EOLed? If their functionalities are replaced by new APIs, is it impossible to rewrite the old APIs as shims which use the new ones underneath?

  25. Anonymous says:

    My opinion would be to move out the APIs Microsoft thinks are due to end their life (I believe Microsoft employees should have good judgment on this) in a trimmed release (possibly after the final beta and before the RTM) and see the impact. The bug-reporting software should send a report on any missing function calls. If Microsoft receives such reports, it should judge whether it’s good to put an API back depending on how frequently the report is received. (I know this will add cost to the development process, but considering the benefit of removing old functions that are difficult to maintain, I think it’s worth considering.)

    For functions that have a whateverEx successor, there could be a redirector that calls the Ex function with the extra parameters set to default values (see the sketch below).

    (More or less like how Windows uses interrupt redirection to work with DOS programs that only know interrupts but don’t know APIs.)
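    Win32 already does this in places – CreateWindow, for instance, is defined in terms of CreateWindowEx with the extended style defaulted to zero. A minimal sketch of such a redirector, with hypothetical names:

        #include <windows.h>

        // Hypothetical modern API: takes an extra options parameter that the
        // original API never had.
        BOOL WINAPI FrobWidgetEx(HANDLE widget, DWORD flags, DWORD exOptions)
        {
            UNREFERENCED_PARAMETER(flags);
            UNREFERENCED_PARAMETER(exOptions);
            return widget != NULL;  // real implementation elided
        }

        // The legacy export survives as a one-line redirector that supplies
        // the default its old callers implicitly assumed.
        BOOL WINAPI FrobWidget(HANDLE widget, DWORD flags)
        {
            return FrobWidgetEx(widget, flags, /*exOptions*/ 0);
        }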

  26. Anonymous says:

    Some years ago I would have also liked the VM idea, but thinking about it a bit more – it doesn’t make that much sense (at least for a while). Before the VM approach, it would be nice if the majority of apps were managed, and instead of writing wrappers around Win32, MS would go as managed as possible. Given that in LH you can write device drivers in C#, it would seem that eventually there will be a new set of UMDF-type low- to high-level layers which would allow more direct managed-kernel interaction, and at some point would allow Win32 to be replaced gradually with a more managed system. I believe this is a more sensible approach than the VM approach, because if we went with a VM you’d still need to write all the new low-level crud to make managed code work anyway.

    As to "When", I think I answered that. EOL it as you go more managed.

  27. Anonymous says:

    Larry Osterman posted earlier today about Turning the blog around – End of Life issues and I thought…

  28. Anonymous says:

    Provide me a tool I can run against my source or binary that tells me what APIs are deprecated *AND* what I should do instead. The "what I should do instead" would need to cover Win98 onwards since most developers won’t want to disenfranchise that non-trivial portion of the userbase.

    You can then rip out the APIs once the deprecation period has expired, which I would expect to be a few years.

  29. Anonymous says:

    I’d say that two good times to sunset an API are either when (a) you simply cannot implement it, or (b) you can transparently reimplement it on top of another public API.

    NTVDM, for instance, was OK to ditch in x64 Edition because the CPU itself simply doesn’t support VM86 in long mode, and writing a full-blown emulator is unreasonable.

    LZ32 is used less nowadays and can be reimplemented with simple, basic kernel APIs, so I imagine it could be dropped from the core OS and either moved to appcompat and/or re-released as a redistributable. waveIn and waveOut also come to mind, if they can be reimplemented on top of DirectSound, and perhaps even GDI can have this treatment after Longhorn ships.

  30. Anonymous says:

    The worst thing you can do to your developers is pull out pieces that their code depends on in a later version. When you do this, you cause them nothing but pain. If the mechanism you shipped in the past is poor, you have several ways to work around the issue. You can change the implementation and test using some test harness and some applications using the API to make sure that existing code still runs. You can deprecate the old methods and update your documentation to point to the new methods and your headers to annotate the methods with deprecation warnings. You can update the implementation to support existing solutions dependent on the method.

    But if you call your product a platform, and you want developers to develop for it and customers to buy it, then you can do yourself no greater disservice than screwing your developers and your customers by making it unclear when what will work with what. This is a huge reason that Windows XP SP2 wasn’t adopted as rapidly as MS hoped, and even Office 2003 SP1 had to run on XP SP1.

    I’m not saying a platform has to stay the same, I’m just saying that you can’t knock some random APIs out of it and still call it the same platform.

  31. Universalis says:

    It doesn’t matter if you make every single piece of *software* obsolete. Software is almost worthless to end users. What is valuable to end users is data.

    So you can, if you like, happily change Windows so that Word 2.0 won’t work, as long as there is a piece of software around that will open Word 2.0 files and do something rational with them.

    Here, therefore, are some uninteresting cases:

    1. In-house-developed software: there is presumably source code around somewhere, so pulling the rug out from under a feature will not cause your customer disaster, merely expense.

    2. "Intimate" software. By this I mean software that doesn’t run on top of (or under) Windows but enfolded and entangled with it– typically something that needs to deal with sound or multimedia or low-level device access. In that case you’re relatively safer obsoleting the APIs because this is not **application** software in the normal sense and probably doesn’t have much in the way of unique data associated with it.

    For all other cases, the simple rule is that you support the API forever. Period.

    Remember that in the general case the user has no access to the source code (and wouldn’t know what to do with it if he did) and the company that made the original program is long since defunct.

    On the whole, I must say that MS hasn’t been too bad about this so far. We get occasional upgrade requests from people who are using MS-DOS versions of our software from the early 1990s: typically they have moved to a new laptop and suddenly find that Windows XP’s MS-DOS emulation is kaput for them. (I’m not sure why; mine works perfectly.) Fortunately we’re not a dead company… otherwise the problem would have been serious for the user, because databases are typically worth more than the computers they run on.

    Now, however – if, with .NET, you say that certain features are marked "Obsolete" and will be removed soon… I have no problem with that. But do you undertake, definitely, never to do that again? Or will you decide, in 2008, that some more features need to be obsoleted? In that case no one with any sense will touch .NET, because the risk is too great.

    You have to address this problem because software is far longer-lived than anyone ever expected when this industry started. You may happily "end-of-life" your own operating systems, but that is allowable because you normally release a successor OS. You *cannot* arbitrarily "end-of-life" the application software on which people’s businesses depend.

  32. Anonymous says:

    I think you should have two strategies for deprecated subsystems and deprecated APIs in the current subsystem.

    Deprecated subsystems (e.g. DOS, Win16) could run in a VM-like environment. The loss of Win16 is imho a big defect of Win x64 (yes, I know the reason is the lack of hardware support) and a VM strategy can be a winner here.

    For deprecated APIs in current subsystems (Win32 and Win64) we can have a different strategy. Let’s have a protected registry tree (or something similar) which contains a list of deprecated functions and a DLL in which they are stored. The OS will simply load the DLL which is a stub for the API or a wrapper on the new API.

    For example, let’s say MessageBoxA gets substituted by a new MessageBoxHTML which uses HTML to format its message. In the registry we’ll have something like MessageBoxA : OLDAPI32.DLL. The OS intercepts an unresolved call to MessageBoxA, which used to be in USER32.DLL (or wherever it is), loads OLDAPI32.DLL on the fly and then calls the OLDAPI32.DLL implementation, which either does nothing, creates a dialog window with the message, or uses the new MessageBoxHTML to show its message (see the sketch at the end of this comment).

    The wrappers could be quite complex. For example, a wrapper which maps the WINMM.DLL wave functions onto DirectSound ones could be pretty complex and not 100% compatible. However, often 98% compatible is enough. Eventually the wrappers could be configurable on a per-process basis (taking the WinMM->DS example, we could have a "no sound" option which has 100% compatibility but doesn’t output any sound, and a "full emulation" option which has some corner cases of incompatibility but does its work).

    Sorry for the long post.
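    For illustration, the OLDAPI32.DLL side of that MessageBoxA example might look something like the sketch below. Everything here is hypothetical: MessageBoxHTML is the imagined API from the comment above (stubbed onto MessageBoxW so the sketch is self-contained), the registry lookup and loader interception are hand-waved, and a real wrapper would escape the markup and honor the caption.

        // OLDAPI32.DLL -- hypothetical home for deprecated wrappers.
        #include <windows.h>

        // Stand-in for the imagined new API; here it just falls back to
        // MessageBoxW so the example compiles and runs.
        static int MessageBoxHTML(HWND owner, LPCWSTR html, UINT type)
        {
            return MessageBoxW(owner, html, L"(rendered from HTML)", type);
        }

        // The deprecated entry point, reduced to a thin wrapper: wrap the
        // ANSI text in trivial markup and forward to the new API.
        extern "C" int WINAPI Compat_MessageBoxA(HWND owner, LPCSTR text,
                                                 LPCSTR caption, UINT type)
        {
            UNREFERENCED_PARAMETER(caption);  // a real wrapper would pass this on
            WCHAR html[1024];                 // wsprintfW caps output at 1024 chars
            wsprintfW(html, L"<p>%S</p>", text ? text : "");
            return MessageBoxHTML(owner, html, type);
        }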

  33. Anonymous says:

    Do it like the Linux kernel does. An API becomes obsolete in the next version, and gets removed in the one after. That way everybody gets time to upgrade their application to the new API, and warnings get logged in the system log if an application tries to use the obsolete API. This way programs keep working, but the developers are encouraged to upgrade to the new API.

    Unfortunately this only works well in a zero-cost (free beer) environment where everybody can upgrade without permission from above (re: budget). The proprietary model encourages people to keep using old versions of the software because it’s cheap. OS vendors that charge for their OS should release cut-down versions that don’t have the extra functionality enabled (and are not as resource hungry), but work with the new API, as a free upgrade. This way you could also provide the essentials like patches for free.

  34. Anonymous says:

    Wow, good question. Personally my opinion is that with the move to 64-bit Windows, breaking the 16-bit Windows compatibility stuff is a pretty good plan. You may find you’ll need to put it back after user complaints, but at the moment the user base is pretty bleeding edge and expects problems. I suspect old favourite games that are no longer produced are the most likely candidates.

    The jewel in the MS crown compared to both Apple & the OSS crowd is that you really, really, really try not to break anyone’s application[1] on a new version of Windows.

    Apple keep screwing over both their developers and users over the upgrade issue.

    The OSS guys seem[2] to have all these problems with having to pull down 20 different versions of the same library to get something to compile so they can actually run the application.

    Keeping DOS compatibility is, however, still important; we still have users running stuff in a DOS window. They don’t want a WIMP interface, they just want a keyboard in a protective cover. That said, supporting DOS long-term can’t be that stressful anyway: most of the file system stuff (8.3 support) would need to be done for servers anyway, so as not to break clients. And DOS doesn’t actually do a lot else; supporting the rest of the hardware via emulation[3] is certainly an option today.

    What I like is that code I wrote back in 1991 still works on today’s OS.

    [1] Certain classes of application – disk utilities, backup and virus scanners, etc. – I’d expect to break.

    [2] I’m not an OSS guy and I am only going on what I’ve seen and experienced occasionally with GPL stuff on Windows. Backwards compatibility doesn’t seem to be high on their list. I suppose the key question is could I take an executable for say XTrek written for Linux 1.x and run it unchanged on 2.x?

    [3] Some means of allowing "device drivers" to break out of emulation is required, we still have code that calls a database via int 07bh in a XP DOS window, which has a device driver loaded which then breaks out back to a Win32 database engine.

  35. Wound says:

    Good question, and one that would be moot if the OS license allowed users who have a license for the current OS to dual-boot or run the old OS in a VM for free.

    In some senses deprecating APIs would be a good thing, at least for 3rd-party developers, because it would stimulate the market for new software, rather than allowing old software to continue for ever. Maybe that’s tough on customers, but MS isn’t the only software company in the world that makes money by selling newer and better (or just prettier) software. Of course, if you do it too often or too soon you risk alienating large numbers of customers, but if you announce the EOLing of APIs several years in advance, as you do with OSes, and publish a list of affected software, that should be OK.

  36. Anonymous says:

    My vote is on major releases only. You publish a list of what is disappearing and what to do about it, such as whether there is a replacement. I think everyone can understand that. So Longhorn is a major release… but perhaps even XP SP2 would qualify.

  37. Anonymous says:

    A lot of the comments here are looking at this from the point of view of somebody with existing applications upgrading to a newer version of the OS. It’s an important consideration, but it’s not the one I am faced with day to day.

    As developers, we rarely get to choose which platforms we target. The market determines that. I and many other Windows developers I know are building *new* applications, but the market demands that they run on older platforms as well as the "current" ones. A non-trivial number of people still run Windows 98 and NT 4, especially as you look at the international market.

    The users of an obsolete API are not necessarily applications that are five or ten years old. They may be brand new applications that rely on an API that was superseded long ago, because it’s available across all of the platforms that must be supported.

    Sure, sometimes your application will dynamically use a newer, better API if it’s available or fall back to the old one if it’s not. But when the old API is sufficient and universal, it will often be used rather than the "current" one, even in sparkling new code.

    Consider Avalon, a whole new presentation layer. How long will it take for it to completely replace GDI? If I’m writing a new general-purpose application today, I *have* to use GDI. Even if I were targeting a release date that coincides with Longhorn, I couldn’t afford to ignore Windows XP, Windows 2000, Windows NT 4.0, Windows Me, and Windows 98. (In reality, we even ensure that the basic functionality of our app works in Windows 95 and NT 3.1.) If I had unlimited resources, I *might* try to develop a parallel GDI/Avalon version. But when is the last time you were on a development project with unlimited resources?

    I don’t have a general answer to Larry’s question, but using GDI as an example, I’d say you could retire it when Longhorn (or perhaps XP with Avalon extensions) is used about as much as Windows 98 is used today. That’s probably four releases after Longhorn, or more than a decade away.

  38. Anonymous says:

    I say if you give enough notice, say 5 years, and offer support for the deprecated APIs through those years, no one can hold you accountable.

    Brett

    "OMG!!1 I can’t run logo on my new 64-bit dual-core machine running Longhorn?! How will I survive without that fantastic little turtle?!

  39. Anonymous says:

    Larry as to the VM & AppCompat layer, I believe Aaron Reynolds and Pierre-Yves Santerre looked into some of this during the early WinXP time frame.

  40. Anonymous says:

    I had an epiphany one day when I realized that if I wasn’t careful, my current crowning achievements were going to be my future defining boundaries. I think that happened for you guys with Win3.1.

    Think about how much crap has to stay in XP and future OSes because they’re forced to support what might have initially been a great idea but now is just dead weight. It hurts performance and it hurts the purity – as much as you can say that of a commercial OS – of the OS. Fortunately, Moore’s law has saved you for a while, but eventually that will slow down, and then software engineers will be _forced_ to implement better long-term strategies.

    Tremendous insight is tremendously lacking. (E.g., who might have ever considered that a .com file extension might one day be confused with a .com TLD and be used to propagate trojan horses via innocent-looking email attachments?) People who can see that far down the road should be treasured and heeded. Usually, they’re not.

    And unfortunately, the draw of many real-world concerns and (for Microsoft) quarterly results skews a lot of the ideals we would all like to achieve.

  41. Anonymous says:

    I’m going to take a different tack than most people have when responding: it’s not the case that the new APIs are better. In many cases with multimedia, the old APIs are simpler and better documented; the new APIs provide no obvious improvements for any of the cases I run into but require much more code.

    Example: I wanted to pull the bits out of my normal, mainstream webcam. There are a host of "new" APIs – the Windows image architecture, the still image architecture, a lot of stuff with "streaming" in the APIs. Not one had an obvious-after-half-an-hour-of-looking way to pull the image bytes out of my camera’s video stream. What actually worked, first time, with the least coding, was the multimedia APIs that are marked deprecated.

    Peter

  42. Anonymous says:

    1. If old APIs aren’t supposed to remain forever, then why are we calling Windows APIs in current code? Why aren’t we calling NT native APIs?

    2. Remember how Microsoft got started on the track it’s on now? Microsoft’s manuals boasted that they were more closely compatible with CP/M than even the maker of CP/M was, because CP/M-86 broke compatibility with CP/M.

    3. Thursday, May 12, 2005 2:57 AM by Universalis:

    > It doesn’t matter if you make every single piece of *software* obsolete. Software is almost worthless to end users. What is valuable to end users is data.

    90% true, but that doesn’t mean the other 10% is irrelevant. Would you prefer for every plane to crash but have the data from the black boxes preserved safely, or would you like for 99.9999% of planes to operate correctly? Sure, no one in their right mind would use Windows in a car or airplane, but some parts manufacturers use it in controlling test equipment to check whether their components operate correctly.

    4. It’s really neat (not) to continue adding features to a VB6 program that can’t be converted to VB.NET because there’s no working migration wizard, and all we have to do is rewrite a few hundred thousand lines from scratch in the new language instead (well, we don’t have to – we keep it in VB6). Same with APIs. They can’t go away until a migration wizard starts working.

  43. Anonymous says:

    << Microsoft’s manuals boasted that they were more closely compatible with CP/M than even the maker of CP/M was, because CP/M-86 broke compatibility with CP/M. >>

    Whether that’s true or not is debatable, though. The PSP in MS-DOS is more backwardly compatible than the Zero Page in CP/M-86, but several system calls went missing or changed semantics.