When somebody gives you a gift of code, it’s more often than not a burden in disguise


Why doesn't Microsoft bundle third-party programs?

Yes, it has been done in the past, but the results were hardly a slam dunk success.

Who ports the software to 64-bit Windows? (Answer: Me, personally. I spent a good chunk of the year 2000 porting millions of lines of code to 64-bit Windows. Just for fun, I did a wc -l on a couple of the "gifts" that I ported. Over 100,000 lines of code in one of them, and over 50,000 in another.)
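`wc -l` simply counts newline-delimited lines. A rough Python equivalent for sizing up a source tree (illustrative only; the file extensions here are assumptions, not the actual mix that was ported):

```python
from pathlib import Path

def count_source_lines(root, exts=(".c", ".cpp", ".h")):
    """Roughly what `wc -l` totals: the number of lines in each file."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            total += sum(1 for _ in path.open(errors="replace"))
    return total
```

Running something like this over a "gift" codebase gives a quick, if crude, measure of the maintenance burden being accepted.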

Who fixes the security holes in the software? (Answer: Me and people I work with.)

Who has to pay all the software taxes that the original software vendor failed to address? (Answer: Me and people I work with.)

Who has to update the program to keep up with new Windows design guidelines? (Answer: Me and people I work with.)

Who has to make the program localizable? (Answer: Microsoft.)

Who has to translate the software into 20+ languages? (Answer: Microsoft.)

Who gets sued if there is a patent violation in the software? (Answer: Microsoft.)

Will people realize that the bundled tools are included merely as a courtesy and that applications should not rely on their continued presence? (Answer: No.)

I know this from personal experience. When somebody gives you a gift of code, it's more often than not a burden in disguise.

And that's not counting the legal and public relations challenges some commenters in the linked article have raised. I mean, heck, Windows Vista simply included some photographs from the user community and look at all the anger that generated, both in the comments and elsewhere. And those were just photos!

(Some people might notice that this is the flip side of free code samples.)

Comments (43)
  1. SRS says:

    Isn’t ‘wc -l’ a Unix command?

  2. Wang-Lo says:

    "When somebody gives you a gift of code, it’s more often than not a burden in disguise"

    I have often found the same thing to be true when somebody gives me the gift of a free operating system preloaded on a new computer.

    (I refer, of course, to such failed efforts as CDC/SCOPE, UNIVAC/RTB, Burroughs/MCP, etc.  What did you think I meant?)

    -Wang-Lo.

  3. brian says:

    You can’t win.

  4. JamesNT says:

    I have found this to be the case with almost all free software – even open source.  

    With Windows and other software I use, if there is a problem I just contact the vendor and chances are the issue gets fixed in the next release or in some update.

    But with free software the usual thing I hear is "You have the source, fix it yourself!"

    Never mind the 4 or 5 other programming projects I am working on at the time that I’m actually getting paid to work on.

    I’d rather just buy what I need and avoid the hassle of trying to find it for free.  And if someone offers me free code, I almost always turn them down.

    Hassle = more money in wasted time.

    JamesNT

  5. Yuhong Bao says:

    Scroll to the end of this for an attempt to bundle a graphing calculator app with Windows:

    http://www.pacifict.com/Story/

    BTW, what was the average quality of the third party apps bundled?

  6. anonymous says:

    Isn’t ‘wc -l’ a Unix command?

    No, it’s a command from a set of POSIX-compatible shell tools. Windows Services for UNIX, LUA Warehouse, and Cygwin, as well as native ports like UnxUtils and GnuWin32, bring the ‘wc’ tool to the Windows world.

  7. Leo Davidson says:

    If the 3rd party components were included in Windows because Microsoft felt their features were needed, then shouldn’t you compare being given existing apps and source with having to write brand new apps in-house?

    If the app was written in-house then, on top of the burden of designing and writing it from scratch, most of your points would still apply.

    Someone at Microsoft would have had to port it to 64-bit, fix security flaws as they’re found, fix localisation problems, and so on.

    So is this really just an argument against bundling apps with the OS, regardless of who wrote them? Or is your point that apps from 3rd parties tend to be lower quality and written without thought for security, localisation, and so on?

    [If the app was written in-house, then it would already have been ported to 64-bit, be localization-ready, undergone a security review, etc. Because those are basic ground rules for coding at Microsoft. -Raymond]
  8. Mark says:

    Hmmm, Apple bundles loads of stuff with OS X and I’ve never heard them refer to it as a "burden".

  9. Yuhong Bao says:

    Hmmm, Apple bundles loads of stuff with OS X and I’ve never heard them refer to it as a "burden".

    But most of them were written by Apple themselves.

  10. JenK says:

    Mark – How many Apple devs do you know who blog about work?

  11. Mark says:

    But most of them was written by Apple themselves.

    Huh?  Python, Ruby, Rails, Apache, etc etc?  Or are we only talking about "applications"?

  12. wtroost says:

    How are developers supposed to know what tools are “merely a courtesy”?  Perhaps they’re just doing what seems right.  It’s a compliment to Microsoft that developers rely on them so much! :)

    [How about “Is the API documented on MSDN?” for starters. Nearly all tools are just courtesy tools and are not part of the Windows API contract. (Occasionally, a tool is part of the contract, such as control.exe.) -Raymond]
  13. Yuhong Bao says:

    You don’t just call random functions inside random DLLs in the Windows directory, just like how you don’t just drink random liquids inside a science lab.

  14. Mihai says:

    I understand the problem from your side.

    But from an external developer’s side, it is often not visible whether something is 3rd party or not, which makes it hard to "not rely on their continued presence".

    The logic is "this has been part of Win by default since Win X, and the About dialog says MS, so it should stay".

  15. Mihai says:

    Since I have started writing my post I see there is some kind of answer: "How about "Is the API documented on MSDN?" for starters"

    Counter-examples

    • Imaging. Was there, some API was documented, but is not in XP anymore (and there is no obvious replacement for that API).

    • Backup: was there and working since NT 4 (at least). Not in Vista anymore (the only thing is a chopped-down version that cannot back up individual folders or FAT32 drives). But that was MS, I think.

  16. Ulric says:

    Agreed with your article.

    One side thing I wonder though; isn’t this a huge waste of time to port everything to 64-bit?  I mean… why do it?

    I know posters above annoyingly brought up OSX and we’re all sick of this. But I can’t make sense of all of this porting to 64-bit. It isn’t done on OSX, and it makes life easier for all users.

    OS X Finder is 32-bit, all the apps are 32-bit. Only the bits that benefit from 64-bit are ported. I’m on Vista 64-bit, and I have to explain the damn 64-bit vs 32-bit thing to my co-workers coming to 64-bit; it really complicates Windows needlessly. And of course it creates a second binary to test for you guys.

    It’s not like 32-bit support is going to be removed tomorrow. In the meantime, users have to be ‘aware’ of this technical detail, with many simple things not working, like shell extensions, or plug-ins in the 64-bit browser.

  17. Mark says:

    I know posters above annoyingly brought up OSX and we’re all sick of this.

    My apologies, I didn’t realize that we weren’t supposed to mention it.

  18. Homer Simpson says:

    "just like how you don’t just drink random liquids inside a science lab"

    Mmmmm… random liquids

  19. Yuhong Bao says:

    Ulric: Here is a little bit of history about this.

    64-bit Windows was originally for Itanium.

    On that platform, there was a big performance hit for running 32-bit apps because of all the x86 emulation that had to be done.

    Thus, MS decided not to allow 32-bit code in kernel mode and to port almost everything to 64-bit.

    On the other hand, the 64-bit functionality in the Tiger versions of Mac OS X was originally for the PowerPC 970 (G5), where there was in fact a possible performance hit for moving to 64-bit.

    Why? Because of the larger pointers and code size. So the kernel and most apps stayed 32-bit; in fact, in Tiger only command-line apps could be 64-bit. Leopard allowed GUI apps to be 64-bit, though most of them were still 32-bit only.

    On x64, there is a little bit of a performance improvement when executing recompiled x64 programs, due to the additional registers and a smaller code-size increase compared to PowerPC. So it is somewhere in the middle. BTW, what about AXP64?

    Python, Ruby, Rails, Apache, etc etc?

    Open Source does this in a different way that has none of the problems described.

  20. Yuhong Bao says:

    the kernel

    Most of it, that is. See the xnu source code for how Apple basically mixed 32-bit and 64-bit code in the kernel. Clue: Look at the ENTER_64BIT_MODE and ENTER_COMPAT_MODE macros. BTW, I wonder how Parallels and VMware execute the Intel VT instructions on Mac OS X (VMXON, etc…) when they can’t even be executed in compatibility mode. But that of course is a different matter that probably should not be discussed here.

  21. steveg says:

    Ulric said: this a huge waste of time to port everything to 64-bit?  I mean… why do it?

    Because 4GB isn’t enough. And it’s even worse because your 2 * 1GB video cards get mapped over 2GB of your RAM. When next-gen video cards hit 2GB+ there will be no address space left for RAM. http://www.dansdata.com/askdan00015.htm
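    The squeeze described above is plain arithmetic: device MMIO windows are carved out of the same 4 GB physical address space as RAM. A back-of-envelope sketch in Python (sizes illustrative; a later comment points out that many cards map only a smaller aperture rather than all of their VRAM):

```python
GB = 1024 ** 3

def visible_ram_32bit(installed_ram, mmio_reserved):
    """RAM visible to a 32-bit, non-PAE OS: whatever fits into the
    4 GB physical address space after device mappings are carved out."""
    address_space = 4 * GB
    return min(installed_ram, address_space - mmio_reserved)

# 4 GB installed, 768 MB reserved for a video card: 3.25 GB remains visible.
```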

    Although if I misunderstood the question and it was "why port little apps like Calc.exe to 64-bit", I can think of a couple of reasons: a) why not; b) good practice for staff and process; c) stops bad publicity ("but it says 64-bit on the box").

    Some very popular applications are now 64-bit only (such as Exchange, although whether that was a marketing, budget, or technical decision I don’t know).

  22. Yuhong Bao says:

    >Because 4GB isn’t enough. And it’s even worse because your 2 * 1GB video cards get mapped over 2GB of your RAM. When next-gen video cards hit 2GB+ there will be no address space left for RAM. http://www.dansdata.com/askdan00015.htm

    PAE can solve this problem without moving to 64-bit, plus most video cards do not map all of their memory at once. Sadly, client versions of Windows act as if the /PAE switch is not specified, even if it is specified. The reason for doing the compatibility hack to limit the physical address space to 4 GB is understandable.

    [Remainder of comment deleted because it violates multiple rules. -Raymond]
  23. Ulric says:

    steveg:

    >to port everything to 64-bit?  I mean… why do it?

    Because 4GB isn’t enough.

    I can see, as argued above, porting everything to 64-bit in the context of the Itanium, in the very long term (since Itanium wasn’t going to the desktop any time soon).

    But on x86, not for that reason above.

    32-bit applications each have 3 gig of address space addressable individually (if they are flagged with /largeaddressaware, and most that need the RAM are).

    Those 32-bit apps already get access to 1 more gig of RAM by virtue of running on 64-bit Windows. You can run Photoshop, Illustrator, AfterFX, and each of these will have access to 3 gig of your total 64-bit memory. BOOM! That’s an awesome performance boost, and no code change or re-testing was required.

    But most apps don’t even use that, of course. The apps that benefit from 64-bit are multimedia or server apps, and they are the only ones worth porting. Maya and 3D Studio Max are already 64-bit; Adobe apps will be coming soon. In the meantime, they are flagged /largeaddressaware.
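    Those address-space rules reduce to a toy model (a simplification: the exact figures depend on the OS and boot switches, and a /largeaddressaware 32-bit process on 64-bit Windows can in fact see close to 4 GB rather than 3):

```python
GB = 1024 ** 3

def user_address_space(large_address_aware, on_64bit_os, boot_3gb=False):
    """User-mode virtual address space of a 32-bit Windows process
    (simplified model of the usual rules)."""
    if not large_address_aware:
        return 2 * GB                # default split: 2 GB user, 2 GB kernel
    if on_64bit_os:
        return 4 * GB                # WOW64 gives LAA processes ~4 GB
    return 3 * GB if boot_3gb else 2 * GB  # /3GB boot switch needed otherwise
```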

    Right now tons of people are moving to 64-bit Vista. (They don’t know why; they just want it on their new machine.) This mainstream audience could have been spared separate Program Files folders and having to know the difference between 32-bit and 64-bit apps and processes.

    They’ve been spared that on OS X. Users don’t feel cheated that bundled apps or the shell aren’t 64-bit. When there eventually is a 64-bit port of something, it will be hidden in a bundle, as opposed to dual "Program Files" folders or different shortcuts.

    Porting "because we can", as you suggest (steveg), just doubles the QA required on these unnecessary binaries; it’s irrational.

    I’m surprised all this extra work wasn’t axed during Vista, as there are no practical benefits.

    It’s irksome on our side as users. For example, because Explorer is 64-bit, the shell extensions we’re used to (simple things like Zip/RAR context menus, source control, etc.) don’t work. There is nothing gained.

    "Just in case", Vista 64-bit had to come with two copies of Explorer, one 32-bit and one 64-bit.

    Why ship (for example) a 64-bit Explorer at all? It means extra development, extra testing, and on our side extra explaining to users why things sometimes don’t work. Computers should be getting simpler, not harder.

    There is also a 32-bit and a 64-bit IE, of course. No one will ever use 64-bit IE, and it just confuses users about the nature of 64-bit and makes them feel disappointed that there are no plug-ins. That should have been axed, too. The inmates are running the asylum on this one.

    With this 32-bit and 64-bit binary duality right there in the UI, Vista 64-bit still seems to be a programmer’s pet project, but the mainstream adoption of 64-bit OSes is really starting this year.

  24. Drak says:

    Hmm, I tried Vista 64-bit. For a week. No, or only beta, drivers for most of my (very recent) hardware. My homebuilt services failed. So I went back to 32-bit Vista. Much better support.

  25. Pavel Lebedinsky says:

    No one will ever use 64-bit IE…

    … no plug-ins …

    I use 64-bit IE all the time. No plug-ins == better security and performance. When I really need Flash or Acrobat, I switch to 32-bit IE.

  26. Sam says:

    How would a 3rd party know in advance about the latest Windows design guidelines? If Microsoft wants to alter the guidelines and carry an application over to a new Windows release, it’s their own job to make sure the guidelines are met.

    But it’s a matter of cost/benefit. If there’s a demand for feature X and there’s already a 32/64 bit, logo certified implementation available, it could be cheaper to license it. If it would cost more to fix a wonky implementation than to write it from scratch, doing so makes sense. Weigh in the overall benefit of the new feature (like increase in sales, improved productivity) and set a budget accordingly.

  27. Anno Nymus says:

    Who adopted 3rd party utilities that should have been part of the OS from the start (and sometimes bought the whole company)? (Answer: Microsoft.)

    Who has a leading position in the desktop operating systems market, to a big extent because they support a lot of devices and bundle (3rd party) utility programs? (Answer: Microsoft.)

    Yes, it may be a burden. But would it really be better if the 3rd party software had been developed in-house? Honestly?

    If for MS – and you and the people you work with – it is a burden to maintain ("formerly") 3rd-party software, I’d suggest "enhancing" the OS to only execute software that passes the WLK.

    And to avoid being sued for unauthorized use, just add the feature to only playback/display DRM-protected and centrally cleared media (yes, even system sounds and wallpapers).

    …once all this is implemented, I’ll just sit back, relax, and watch the fireworks… ;-)

  28. Neil says:

    "but it says 64bit on the box"

    I think Windows 95 said 32bit on the box, but msgsrv32.exe, the "Windows 32-bit VxD Message Server", was, naturally, a 16-bit app ;-)

    And with Resource Meter they couldn’t win – the shell icon calls were only 32-bit, but the resource API calls were only 16-bit.

    I still use 16-bit software.

  29. Aaargh! says:

    My apologies, I didn’t realize that we weren’t supposed to mention it.

    It makes MSFTies feel insecure about themselves.

    They’ve been spared that on OS X. Users don’t feel cheated that bundled apps or the shell aren’t 64-bit. When there eventually is a 64-bit port of something, it will be hidden in a bundle, as opposed to dual "Program Files" folders or different shortcuts.

    In OS X, different versions of an application are stored in the same executable file (a fat binary); if you do a ‘file’ on the actual executable you’ll see at least an i386 and a PPC version.

    Bundles are used to bundle an application with its resources, metadata, filetype associations, etc., but they are not needed to ‘hide’ different executables; there is one executable, and it can contain code for multiple architectures.

    In fact, from Leopard on, there is only one version of OS X for x86, PPC, and 64-bit x86. Also, 32-bit drivers work on the 64-bit version. Joe User doesn’t need to concern himself with this whole 32/64-bit stuff.
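    What ‘file’ reads off a universal binary is the fat header at the front: a big-endian magic number followed by a count of architecture slices. A minimal sketch of that check (synthetic example, not a full Mach-O parser):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic of a Mach-O fat (universal) binary

def fat_arch_count(data):
    """Number of architecture slices in a fat binary, or None if not fat."""
    if len(data) < 8:
        return None
    magic, nfat_arch = struct.unpack(">II", data[:8])
    return nfat_arch if magic == FAT_MAGIC else None
```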

    Why Microsoft chose to have two separate versions of Windows for 32 and 64 bit is beyond me. It needlessly complicates stuff. If a relatively small company like Apple can make one DVD that installs on two different architectures and on 32- and 64-bit systems, I don’t see a reason why MS couldn’t have done the same. Maybe it’s a licensing thing.

  30. Mike Dimmick says:

    @steveg: Moving Exchange 2007 to 64-bit was to get around Exchange Store’s long-standing virtual memory issues, which led to a demand that you booted with the /3GB switch if you had as little as 1GB installed. Fundamentally ESE, which Exchange Store is built on top of, doesn’t operate by buffering database content directly into memory as SQL Server does, leading to a lot of problems due to fragmentation in the memory allocator. The main problem here is that the block size allocated is highly variable so no amount of keeping special lookaside lists helps.

    Going 64-bit alleviates these issues by giving it a lot more space to mess about in. In addition, some installations with multigigabyte databases where the users are hitting it very regularly will benefit from the ability to buffer much more space.

    Because of the consequences of /3GB, shrinking the system address space to 1GB, you were told not to boot with /PAE before Windows Server 2003 SP1 (because that doubles the width of all the page table entries). That OS introduced the use of the NX bit, which requires PAE mode anyway. You’re still advised not to fit more than 4GB of RAM into an Exchange Server 2003 server, as the data structures required to manage the extra memory, and I think some other allocations which vary proportionally with physical RAM fitted, suck up too much of the system address space available and reduce the number of concurrent connections that can be made.

  31. Austin Wise says:

    Why Microsoft chose to have two separate versions of Windows for 32 and 64 bit is beyond me.

    Since OS X came out in 1999, while the first version of Windows NT came out in 1993, the improvements in hardware might have made Apple’s solution viable. There was also that much more knowledge about the construction of operating systems.

    For what it’s worth, .NET apps are generally compiled to be instruction set agnostic (/platform on csc.exe defaults to anycpu).

  32. Christopher says:

    If Windows had some kind of translation going on to allow 32-bit drivers, I’m sure that it would mean that almost no-one would bother writing 64-bit drivers. Apple have a pretty tight grip on their ecosystem and can and do regularly announce that support for old technologies is being dropped as of now, but that doesn’t fly with enterprise customers who are so important to Microsoft. Microsoft don’t want to have to support some major driver memory virtualisation workaround for years to come.

    It may be similar to the Win16 problem. I’m sure that 64-bit Windows could have come with an emulator for running 16-bit programs (after all, Mac OS X on x86 came with one for PPC programs). But this would have been a lot of work for Microsoft and give no incentive for anyone to upgrade from their old Win16 programs which Raymond has mentioned being a problem for killing off the API, and they’d still be worrying about 16-bit compatibility subsystems in another 20 years’ time.

  33. Mike Dimmick says:

    @Aaargh: Why doesn’t Microsoft support 32-bit drivers?

    Fundamentally a lot of 32-bit drivers don’t support PAE mode correctly. They can’t be presented with a 64-bit physical address, even though that’s been supported on the PC platform since the Pentium Pro in 1995.

    Even if they do support it, Windows would have to wrap all calls to a 32-bit driver to ensure that the driver was only ever presented with 32-bit virtual addresses. That means remapping any user data buffers into the right address range, which would make the 4GB large address space feature for 32-bit programs impossible – it would have to be capped at 3GB to allow the region from C000’0000 to FFFF’FFFF to be used for remapping. This remapping would have to be done dynamically and Windows would need to know which parameters were pointers, which for custom I/O control codes is not known. (64-bit drivers are responsible for translating 32-bit versions of their custom I/O control structures – see http://msdn2.microsoft.com/en-us/library/aa489577.aspx.)

    32-bit drivers could also be using code that’s banned from 64-bit kernel mode, like x87 FPU code. It’s banned so that Windows doesn’t have to dump the FPU stack, a very costly operation, when switching away from a thread currently running in kernel mode, and doesn’t have to restore it when switching to one.

    So even if we were willing to live with the performance issues created by use of 32-bit drivers, there are some intractable problems preventing it.

    Basically Microsoft have treated 64-bit Windows as a chance to fix some of the issues causing performance problems on 32-bit Windows, which were (often unintended) consequences of the design back in the early 90s.

    Apple haven’t had this problem, as they’re dealing with many fewer drivers, on a system that’s far newer (2001 in its PowerPC incarnation, 2006 for x86), and with the expectation from the beginning that it would run on 64-bit, so with fewer of the issues that cause problems for MS. Also, most of the drivers come from Apple themselves.

  34. Mike Dimmick says:

    @Austin: with regard to .NET, that’s because the actual code executed by the machine is compiled at runtime by the JIT compiler.

    If you compare the output between compiling with AnyCPU, x86, x64, or even Itanium, you’ll see that the MSIL bytecode is identical. The only difference is in the CLR header and, if you’ve selected x64 or Itanium, in the PE header. The PE header tells Windows to create a 32-bit or 64-bit process (if it’s an EXE); the CLR header tells the CLR whether this assembly can be loaded into the process you’re trying to load it into.

    Assemblies marked AnyCPU can load into either 32-bit or 64-bit processes, and IIRC Windows will create a 64-bit process on a 64-bit OS. Assemblies marked x86 can only load in a 32-bit process. Assemblies marked x64 or IPF can only load into a 64-bit process on the appropriate 64-bit OS. There are multiple GAC directories, one for each CPU type, so that you can have assemblies differing only by CPU type installed on the same system. (You need a command prompt to view them, as there’s a shell extension for Explorer which takes over when you navigate to C:\Windows\Assembly [or wherever your Windows folder is], but if you issue a ‘dir’ command you’ll see GAC, GAC_32 and GAC_MSIL folders if you have .NET 2.0 installed on a 32-bit system, and a GAC_64 folder as well on x64.)

    This marking feature is there so that if you have some compatibility issue – perhaps using a sized integer rather than the variable-width IntPtr, or casting an IntPtr, or simply an API that doesn’t exist on the 64-bit or 32-bit platform or is implemented differently – you can ensure that this is dealt with early and not with some exception or error at the call site.
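    The PE-header half of that marking is easy to inspect by hand. A small sketch that reads the COFF Machine field, using offsets from the published PE/COFF layout (not a full parser, and it ignores the CLR header entirely):

```python
import struct

MACHINE_NAMES = {0x014C: "x86", 0x8664: "x64", 0x0200: "Itanium"}

def pe_machine(data):
    """Return the COFF Machine field of a PE image as a friendly name.
    The offset of the PE header is stored at 0x3C; the Machine field
    follows the 4-byte 'PE\\0\\0' signature."""
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_offset:pe_offset + 4] == b"PE\x00\x00", "not a PE image"
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]
    return MACHINE_NAMES.get(machine, hex(machine))
```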

  35. Gabe says:

    Aaargh!: I don’t see how bundling all the architectures together helps anything. Sure, it means you only have one file and don’t have to care about what’s in it. But then that file is 2 or 4 times the size of what you would ordinarily have to download, and there’s no guarantee that the file has what you need to run.

    Let’s say you have a browser that supports both 32-bit and 64-bit. When you open it on a 64-bit machine, it will run the 64-bit version of the code, right? So what happens when it goes to load a plugin that only has a 32-bit version in its bundle?

    With Windows, each version is a separate file so I can easily choose to run the 32-bit version when I want to use 32-bit plugins. If they were all in the same file, I would either be stuck with whatever version the OS gave me or I would have to use some more complicated method of choosing which to use.

    If OS X users don’t have to concern themselves with these issues, it must be because there are simply no 64-bit apps that use 3rd-party libraries yet. Clearly porting to 64-bit is as nontrivial as Raymond suggests, or OS X would be all 64-bit by now.

  36. Yuhong Bao says:

    "Let’s say you have a browser that supports both 32-bit and 64-bit. When you open it on a 64-bit machine, it will run the 64-bit version of the code, right? So what happens when it goes to load a plugin that only has a 32-bit version in its bundle?"

    More common, however, are PowerPC-only plugins that have not been ported to Intel, which is why Get Info has an "Open in Rosetta" option.

    "Clearly porting to 64-bit is as nontrivial as Raymond suggests, or OS X would be all 64-bit by now."

    Yep, as I said, in Tiger only command-line apps could be 64-bit.

    Leopard allows GUI apps to be 64-bit.

  37. Yuhong Bao says:

    And for that matter, who ports the software to Alpha, MIPS, and PowerPC? (Yep, Microsoft)

    Because most of these examples happened in the Win95 days, and thus the software had to be incorporated into NT 4.

  38. Wizou says:

    IIRC, MSDN used to document the QuickView API in order for developers to make their own type viewers.

  39. 32BitRules says:

    Because 4GB isn’t enough. And it’s even worse because your 2 * 1GB video cards get mapped over 2GB of your RAM. When next-gen video cards hit 2GB+ there will be no address space left for RAM. http://www.dansdata.com/askdan00015.htm

    Incorrect. "Dan" just happens to have a graphics card with the same amount of memory as the aperture size. The 64MB card in my work computer also maps 256MB. If "Dan" upgraded to a 512MB or 1GB card, it would still map 256MB.

    GRAPHICS CARDS DO NOT MAP THEIR ENTIRE VRAM INTO THE PHYSICAL ADDRESS SPACE (unless they have <=256MB).

  40. David Walker says:

    Mike Dimmick:  Re Exchange Server, I always wondered if Exchange Server should or could be rewritten using SQL Server as its data store.  It would be a big undertaking, with possibly big benefits.

    At the 10 and 15-person companies that I consult for, when a couple of them have asked if they should run this fancy thing called Exchange server that they have heard of in passing, I say NO!  Let your ISP handle e-mail, even between users in the same company, and use some other solution for shared calendaring and conference room scheduling and the other things that Exchange can do.

    Now, this doesn’t apply to 200-person companies or 1000-person companies or really big companies.  But Exchange Server is a "whole ‘nother world".

  41. Igor Levicki says:

    GRAPHICS CARDS DO NOT MAP THEIR ENTIRE VRAM INTO THE PHYSICAL ADDRESS SPACE (unless they have <=256MB).

    Really?!?

    How come Windows XP x64 with 4GB RAM shows 3.25 GB of physical memory in System Properties when you have an XYZ card with 768 MB of VRAM installed? Coincidence?

  42. Igor Levicki says:

    I meant to say Windows XP above.

  43. Myria says:

    NT 64 can’t have 32-bit drivers due to some of the core design elements of NT.  The most important one is that the user-mode address space is directly accessible to kernel mode.  Much kernel-mode code assumes that it can access the current user thread’s memory space directly, within appropriate guidelines.  This obviously doesn’t work for 32-bit drivers when addresses may be larger than 2^32.

    Darwin, in contrast, reloads the entire page table on a system call, so it has an opportunity to do thunking between a 64-bit user memory space and the 32-bit kernel.  Apple’s old design decision works out well for usability, but they get nailed on performance.

Comments are closed.