What would be the point of creating a product that can’t do its job?

Yuhong Bao for some reason lamented that there is no 32-bit version of Windows Server 2008 R2.

Well, duh.

Why would anybody want a 32-bit version of a server product? You would run into address space limitations right off the bat. You couldn't use it as an Exchange server, your Terminal Server couldn't support more than 100 or so users, your file server disk cache couldn't grow beyond 2GB (and probably much less), and your SQL Server would be forced into AWE mode; even then, AWE memory is used only for database page caches, not for anything else.

Basically, a 32-bit server would be pretty much useless for anything it would be asked to do in its mission as a server.

(Device driver compatibility is a much less significant issue for servers, because servers rarely run on exotic hardware. Indeed, servers typically run on the most boring hardware imaginable and explicitly run the lamest video driver available. You don't want to take the risk that a fancy video card's fancy video driver is going to have a bug that crashes your server, and besides, nobody is sitting at the server console anyway—all the administration is done remotely.)

Comments (57)
  1. Antonio 'Grijan' says:

    Applying the rule of thumb that it only makes sense to run a 64-bit OS on a machine with 4 GB or more of RAM, it seems users asking for 32-bit versions of Windows Server expect to run them on a machine with 2 or 3 GB (if not less). What that would be useful for, I don't know. But my guess is: not for much.

  2. Joshua says:

    It was an annoyance to me for years. Lots of our tools just didn't like 64 bit so I had to use workstation SKUs. We know we *can* shear off the stops (1 user and 5 fs clients) but it's legally dodgy.

  3. Antonio 'Grijan' says:

    @Joshua: if you control the tools, isn't it easier to fix the compatibility issues? 64 bit OSes are here to stay, and nowadays, you can't buy a computer with less than 4 GB of RAM (except for tablets and convertibles/netbooks, but I don't think you will be using them as servers…).

  4. anonymouscommenter says:

    There was a time where some processors in the Intel Atom line were 32-bit only.  Not exactly server grade hardware (or very powerful), but the fanless industrial type designs were handy for certain applications.  If you wanted a Windows Server OS, you were stuck with 2008.  

  5. anonymouscommenter says:

    I had to wait on a vendor. Besides we were still running on all this recycled hardware.

  6. anonymouscommenter says:

    Actually, you can run hundreds of 32-bit servers, all with less than 2GB of memory: think about virtualization.

  7. anonymouscommenter says:

    Unless the server is a print server. The lack of 64-bit drivers for legacy hardware is quite real. The universal printer drivers don't work correctly, or worse, hang when Word takes 1 minute to view a single document in Print Layout.

  8. anonymouscommenter says:

    Microsoft sold Windows Server 2000/2003/2008 for x86 too.

    These had the same limitations you describe, but nevertheless your company produced them.


    And your argument with the drivers is lame: both Windows 2003 and 2008 have PAE enabled by default, so the drivers already had to support 64-bit addresses.

    OTOH, PAE allows the use of up to 64GB of RAM, but that is only supported on Windows Server 2003 Enterprise Edition.

  9. anonymouscommenter says:

    So, why isn't there a 32-bit version of Exchange? It's only a mail server. Why would that, of all things, absolutely need a boatload of RAM?

    As for Terminal Server, "In a Terminal Service environment, this means that a 2 GB address space is shared by all of the processes that are running on the server" – uh, WTF? Each "process" doesn't get its own address space? If nothing else, isn't that a huge security issue, or does Windows TS implement its own page-protection mechanisms to prevent them reading/writing each other's memory?

    As for disk caches, according to Wikipedia, some editions of 32-bit Windows Server can access up to 64GB of RAM using PAE. Given that the disk cache is in the kernel, such a system should be suitable for use as a file server under at least a few circumstances for a few years yet.

    While we are living in a time of "big data", some useful databases still exist that are only hundreds (or even tens!) of MB in size.

    And there are dozens of other uses a 32-bit server can be put to even in this day and age. Chat server. DNS resolver. Time server. Dedicated Quake III server.

  10. anonymouscommenter says:

    Don't forget that there are governments that have crazy (?) restrictions and validation requirements. I heard about one in Europe that only uses 32bit systems because the driver of their special login device wasn't yet certified/validated by them for Windows 64bit. I guess they need the same login device on servers too so they probably stick to 32bit Windows server systems for now.

    And yes, they do have laptops with 8GB of RAM :-)

  11. Dan Bugglin says:

    Would still be an improvement over the 2k3 32-bit server at work. :)

    Can't update to 2k8 at all since we have .NET1.1 apps that our customer runs on there (to test our stuff against), and upgrading would break them.

  12. anonymouscommenter says:

    @Stefan Kanthak

    PAE only extends physical memory addressing not virtual addressing, so drivers would still only be addressing 32 bits of memory

  13. @Karellen: I think Raymond meant the physical address space, which without additional flags is limited to 2GB for user-mode processes in most editions of 32-bit Windows.  Processes still get their own virtual address spaces, regardless of whether TS is being used or not.

  14. > Why would [a mail server], of all things, absolutely need a boatload of RAM


  15. anonymouscommenter says:

    … which the kernel, or rather its memory manager, can freely place anywhere in the PAE-addressable physical memory (see AWE too).

    But that's not the point: the device driver compatibility Raymond mentions meant that Microsoft feared that drivers written for 32-bit consumer versions of Windows might fail on systems with more than 4GB RAM.

  16. anonymouscommenter says:

    Performance can also be achieved with a multi-process server and user-specific (thus smaller) databases, instead of Exchange's single-process, single big-database architecture.

    I did not follow Exchange lately, but does it still have the 75GB limitation for its JET database?

  17. anonymouscommenter says:

    Okay, but the driver is still addressing only 32 bits of virtual memory. Whether a virtual address is mapped above the 32-bit physical address space is irrelevant to the driver.

  18. anonymouscommenter says:

    If nothing else, this is an excellent demonstration of how it's just impossible for Microsoft to do anything right. If you compromise progress for the sake of backwards-compatibility, people complain that you should just abandon all the old stuff and throw out compatibility and let people complain to the vendors about how their software doesn't work in the new environment.

    But when that approach is taken, as it was here, people complain that Microsoft should have continued to support the old, outdated hardware, and complain about driver compatibility issues as if maintaining compatibility were Microsoft's problem. It's just a no-win situation! In so many other posts, people have said that if software and drivers don't work in an environment, that is the vendor's responsibility to fix, and Microsoft shouldn't hold back the march of progress just for that. After all, 64-bit hardware is commonplace now, and 64-bit computing has been available for quite a few years by this point.

  19. Karellen says:

    @MNGoldenEagle: User-mode processes don't get any access at all to the physical address space, and certainly not 2GB of it. User-mode processes *only* see a virtual address space. But the article which Raymond linked to (but did not write) states that all user-mode processes share a single address space.

    Note that the 2GB split is in the virtual address space seen by processes, with 2GB reserved for kernel addresses (in order to reduce/eliminate page table invalidations during a system call). This does not mean that 4GB of physical memory is split 2GB/2GB. For example, the kernel could be using the first 256MB of physical memory, and 15 other user-space processes could be using another 256MB each throughout physical memory. However, each process would see its own 256MB mapped into the user-space half of its virtual address space, no matter where that 256MB was residing in physical memory.

    (And that's without going into PAE, which complicates things in some ways, but simplifies them in others…)

  20. anonymouscommenter says:

    @Max: If MS said "supporting application X on 32-bit doesn't make business sense for us, as it's costing us more money than it makes", then that would be entirely reasonable.

    But saying "the software would be of no use to anyone" when it's trivial to demonstrate that there are reasonable uses to which the software could actually be put, is just… well, "obviously mistaken" is the most generous way of putting it that comes to mind.

  21. anonymouscommenter says:

    @Anon The physical address matters if the device is doing DMA (which is more devices than you might think, pretty much every NIC, HBA, USB host, and graphics card, in addition to others).

  22. anonymouscommenter says:

    And that's no limitation at all: each process's virtual memory is addressed with only 32 bits.

    With more than 4 GB of RAM usable via PAE, you could run more/bigger processes without the need for paging.

  23. anonymouscommenter says:

    @Karellen: "If MS said…".

    Raymond is speaking for Raymond, and — unless I've missed something — nowhere here represents what he's writing as official MS policy.

  24. Jan Ringoš says:

    Anon, as far back as Windows NT 3.51, 32-bit kernel-mode drivers have worked with 64-bit addresses. See the PHYSICAL_ADDRESS structure.

  25. anonymouscommenter says:

    At the end of the day, a company has to decide what makes sense based on cost-benefit. No matter how confident you are that your software works on both platforms, you'd still have to test and support two products. That's the same analysis the Visual Studio team did when they chose to stay on a 32-bit architecture for a while, although they could have offered a 64-bit version at the same time for those less-than-zero-point-one-percent of users that try to load very large solution files.

  26. Anonymous12778 says:

    @The MAZZter: My company ran a home banking product built on ASP.Net 1.1 on Windows 2008 for a couple of years.  The way backwards compatibility is handled with older versions of .Net is to simply install the older version of the framework, which you certainly can on Windows 2008.  I think you have some problem other than the version of the .Net framework.  

  27. anonymouscommenter says:

    @Tristan Miller

    That's not something the device driver worries about though right? That's a hardware issue? I admit my knowledge of DMA is very slim so I am unsure, but I thought the point of DMA was so that memory access did not have to go through the CPU, thus the device driver wouldn't be affected by DMA either

  28. anonymouscommenter says:

    DMA uses physical addresses, so the physical memory for the DMA source and destination must be addressable by the device itself.  If the device only has 32-bit registers for the source/dest addresses, then the physical memory reserved for that device's DMA operations must be accessible via 32-bit addresses.  DMA ranges are not usually so large that they would eat too much of the reserved physical memory.  For a video capture card the ranges could be significant, for instance.

  29. Alice Rae says:

    It's not that there's *no* application for a 32-bit server. It's that demand for 32-bit servers is low enough that Microsoft decided it wouldn't be worth putting time and resources into developing what would largely be a niche product.

    Capitalism at work, I suppose.

  30. anonymouscommenter says:

    It would seem to me that the definition of "server" here is "a machine that can run one of the following applications". Some vendors (MS among them) have chosen to write their applications so that they require a particular kind of licence on the OS. If you are unhappy with that, speak to the vendor. However, you can boot up a 32-bit "client" edition of Windows and use it as a file server or a database server or numerous other useful roles (in the informal sense of the word, not the "server roles" sense) so the lack of a 32-bit version of the fancy licence really isn't an issue for most people. This is fortunate for anyone who has a "server" that is basically one of the standard roles but with a little bespoke business logic thrown in, where the business logic is some tragic piece of 32-bit-only cruft and they haven't got vendor support (or the source) anymore.

  31. JM says:

    I see two curiously paradoxical stances on this:

    – People who argue that a 32-bit server can still do fine even with more than 4 GB of memory, by virtue of things like AWE/PAE and other hacks that are entirely superfluous on a 64-bit server. That's certainly true (up to a point, since those hacks do add access overhead), but then again, *those are hacks that are entirely superfluous on a 64-bit server*. Eliminating them simplifies things a good deal.

    – People who argue that a 32-bit server is a good fit if your server has 4 GB memory or less. It'll certainly *suffice*, but it disregards the fact that a 64-bit operating system can also run perfectly well with 4 GB of memory — though admittedly not as efficiently because the increase in address sizes means more will fall to overhead. Still, this is OS overhead we're talking about, since your 32-bit software will, with great odds for compatibility, still run fine (that may change someday, but someday is still years away). At my company we actually run plenty of 4 GB virtual servers with 64-bit OSes.

    Only if your hardware is literally incapable of running a 64-bit OS or your irredeemable 32-bit software is running into compatibility issues does a 32-bit OS really make sense. And if that's the case, you probably needn't be too concerned about not having the latest version of Windows available for those machines, since you should probably be averse to changing anything about that setup in the first place. Disconnect the thing from the Internet and the lack of Microsoft support shouldn't be an issue for as long as your hardware vendor will support you.

  32. Nico says:

    > Unless the server is a print server.

    Setting up and maintaining a Windows print server for the university department I worked for was one of the most frustrating and disheartening things I have ever had to do as a sysadmin.  The mess of PostScript and PCL, "universal drivers", vendor drivers which break, crash (sometimes even BSOD), lock up, leak memory, and blow up the Print Spooler is bad enough all by itself.  Add in trying to make sure all your clients (old and new Windows desktops (where printer connections are mostly per-user but can be coerced into per-machine with the right blessings but then registry permissions cause problems unless you edit them…), Mac OSX (versions 4, 5, 6, 7, 8, 9), Linux (in some ways the easiest (!), in spite of CUPS [aka, Can't Usually Print Stuff]), mobile, etc) *somehow* work correctly via SMB printing, the supposedly universal Internet Printing Protocol, or an unholy HP "cloud printing" /thing/…

    Someday I hope printing technology catches up with the rest of the world. It's been horrible for decades.

    [1] theoatmeal.com/…/printers

    [2] http://www.emmitsburg.net/…/computer_9.htm

  33. anonymouscommenter says:

    @Nico: on the bright side, discouraging people from printing is great from an environmental perspective.

    …sorry, just trying to find the silver lining here.

  34. anonymouscommenter says:

    @JM: The static overhead on a 64-bit system is nearly 1GB once it's loading 32-bit tasks, so it's a big difference. A 64-bit system with < 4GB RAM is just dumb. Maybe best would be to sell 32-bit Windows workstations with server-like SKUs (to enable terminal services for sysadmins and allow more clients to connect to shares).

  35. anonymouscommenter says:

    @Joshua: How many tasks are you really thinking of that are 32 bit only but push the limits of the RAM they do have?  The Linux guys tried to cater to this niche and developed an ABI they called "x32" which allowed access to all the other enhancements of x64 processors while still keeping 32 bit pointers and such.  It's been in the kernel for years and Ubuntu released packages for it at one point, but there really hasn't been much talk since 2013.  No one really cares because hardware is so cheap that there are very few situations where it's worth the trouble.

    If your machine can't have more than 4GB of RAM, get something better and throw cheap RAM at the problem until the little bit of overhead 64 bit adds to most applications is negligible.

    I'm with JM, the only remaining reason to want a 32 bit OS is because your old and non-updated application doesn't run on a modern system, and the correct answer is that you should have been working on getting rid of that garbage when you discovered that fact years ago.  It's not like 64 bit snuck up on anyone, even client-side Windows got it in an actually usable form (sorry XP64 team, way too many consumer device manufacturers just couldn't be bothered at the time) over eight years ago.  Excuses about time to test and such were valid until maybe 2010 at a stretch.  By then if a business' IT didn't know what applications would require replacement in the near future they weren't doing their jobs.  If a company still depends on programs that don't run on 64 bit systems for critical tasks there is a pile of incompetence somewhere in the stack.

  36. anonymouscommenter says:

    Of course, the lack of 32 bit support meant there was no upgrade path from Windows Server 2003 besides an over-the-wire migration and buying a newer, bigger server to achieve it – at which point, of course, the company I was supporting at the time found the path of least resistance was to dump Exchange entirely and switch to Google Apps.

    "Why would anybody want a 32-bit version of a server product?"

    Well, for most of the decade that happened in, 32-bit was the only offering Microsoft had for the small business market – and for that market, the idea of needing more than 4 GB of RAM to handle a gigabyte or two of email does seem either comical or depressing. Yes, it's much easier to afford now, but at the time, the 32-bit system was more than powerful enough for the job, so the lack of an upgrade path meant replacing Exchange.

    I'm a fan of x64, I'm even hoping to get some of our lab machines switched over this year or next, but the abrupt jump from 32-bit only to 64-bit only for SBS was a big pain at the time, and premature when the hardware didn't yet exceed 32 bit limits anyway!

    (Back in the present day, at the other end of the scale, we still have several Windows Server 2003 systems in active use where I am now. Hopefully getting replaced in the near future, but OS upgrades seem to have a bad habit of coinciding with rounds of staff layoffs around here…)

  37. cheong00 says:

    Actually, we once had a server that used a fancy fax card, and the fax card's vendor (Dialogic) was sold to Intel, and Intel charged the same price as the card itself for the new Win2003 driver, so we had to keep the server running on Win2k.

    So no, the assumption that "servers don't run on fancy hardware" is simply not true.

  38. anonymouscommenter says:

    When I bought a home server in 2009, it had an Atom with 1GB of RAM.  At the time, 2008 R2 was the current release, which wouldn't install on 1GB (being 64-bit only and all).  So I went with Linux.  My home server just doesn't need to support more than 100 terminal server users, or even more than 10.  It does need server-y things like DHCP, DNS, support for trivial web and mail serving, etc.

    (Somewhat related, my desktop has 8GB RAM and dual-boots 64-bit Windows and 32-bit PAE-enabled Linux, both of which work well.)

    Clearly for many customers who do need more scalability, 64 bit is the right choice.  But it's not the right choice for everyone, and it's a shame MSFT won't let people make their own choices.

  39. Yuhong Bao says:

    BTW, in retrospect I think adding PAE to XP SP2 would have been less painful than doing "XP" x64, though it would come at the cost of having to use AWE obviously. Especially when you realize for example only actual hardware drivers that do DMA needed to be updated.

  40. anonymouscommenter says:

    I don't think either side fully appreciates the other's position.

    The user assumes that Microsoft could produce a 32 bit version very easily as it's just a compiler switch.

    Microsoft assumes the user has no possible use for a 32 bit version.

    Neither is correct in their belief. However the financial burden on Microsoft to do QA/WHQL/etc would far outstrip any benefit & Microsoft aren't a charity.

    IMO Raymond has rhetorically begged the question.

  41. Engywuck says:

    I was really grateful when Microsoft *finally* pulled the plug on 32-bit, at least for servers. Before this, quite a few software companies held the belief "well, it runs on 32-bit, why should we test for 64?" – even if the product was marketed for server use. Next step will be to completely get rid of WoW64 ;-) Oh, and getting the consumer-grade OS to 64-bit only, too. But even Win10 will have a 32-bit version :-(

    Somewhat related: one of the problems with "32bit Windows" is that everyone expects that 16bit-Win3.1-era-programs still work on it…

  42. anonymouscommenter says:

    I agree that a 32-bit OS is kinda useless on contemporary hardware; but with the recent push for very small server VMs ("nano server") there might be some benefit in having a 32-bit build of Windows Server. The 32-bit versions of Windows typically use fewer resources, so I can run more nanoservers on my physical machine.

    (It is possible, though, that the higher typical resource usage of 64-bit Windows is mostly due to the omnipresence of WOW64 which doesn't exist in Nanoserver configurations anyway.)

  43. anonymouscommenter says:

    The problem with releasing BOTH 32-bit and 64-bit OSes is that now I can't write my application to target 64-bit exclusively.

    It won't work on a 32-bit OS, so management demands that I write a 32-bit application to satisfy all of the idiots still running 32-bit OSes. Since I'm now writing a 32-bit application, I don't have the resources available to target 64-bit OSes, or test that path.

  44. laonianren says:

    @Karellen re Terminal Server

    The linked article is vague (I suspect the author isn't entirely clear on this point himself) but I assume it's talking about *kernel* address space.  Every user mode process and all the files it opens and so on require kernel structures, so the size of the kernel address space limits the number of processes.

  45. @Karellen, laonianren: Agreed, it looks like this was written from a sysadmin perspective (and wasn't written by Microsoft, certainly).  Sysadmins don't always understand or distinguish between physical and virtual memory space (with maybe the exception of how it relates to swapping).  I would be extremely surprised if enabling Terminal Services made all processes share the same virtual memory space.

  46. Anon says:

    On a 32-bit OS, there's a 2GB application address space without PAE, 3GB with. And if you use PAE, you're stuck with 1GB of kernel address space, so you're screwed there too.

    To rephrase: On a 32-bit OS without PAE, you get 2GB application, 2GB kernel, and that's *all the memory you can ever address*.

    On WOW64, you get a 2GB per-application address space.

  47. anonymouscommenter says:


    You're 12 years behind the times. Exchange 07 had limits of 50/250GB depending on version. The limit for Exchange 10+ was increased to *.

  48. alegr1 says:

    @James Sutherland:

    The cost of an additional 4GB has been trivial for a long time.

    A fax modem connected via RS-232 would not require an IHV driver.

  49. anonymouscommenter says:

    Congratulations: 3 errors in a single post!

    The default virtual address space split for user/kernel, with PAE as well as without it, is 2/2GB, not 3/1GB.

    The /3GB switch is independent of PAE.

  50. alegr1 says:


    On WoW64 you get 4GB for applications with the LARGEADDRESSAWARE flag.

  51. anonymouscommenter says:

    @alegr1  You don't understand what a "fancy fax card" is. They have a T1 type interface and can receive 24+ faxes over hundreds of telephone numbers at the same time. They have special interfaces for handling large numbers of faxes simultaneously.

  52. anonymouscommenter says:


    >The user assumes that Microsoft could produce a 32 bit version very easily as it's just a compiler switch.

    It may not be quite that easy but they've already done the vast majority of the work by keeping 32 bit client versions.

    Personally I would have been happy if either Vista or 7 cut 32 bit support once and for all.  But the reason to do it is to simplify things.  Not because the 32 bit servers we've been using for decades have suddenly stopped being able to do their jobs.

    [Sure, the 32-bit client version ensures that the client components are still working well in 32-bit mode. But running client components is not why you bought a server product. If you want to run client components, then just run the client SKU. -Raymond]
  53. anonymouscommenter says:

    The why is simple and easy: check the prices for Azure or Amazon VMs with 4+ GB of RAM.

  54. cheong00 says:

    @Sam: That's exactly what I'm talking about. Modems connecting through RS-232 won't cut it.

  55. anonymouscommenter says:

    "If you want to run client components, then just run the client SKU."

    If only the client SKUs were not intentionally crippled, for example allow only 5 (or 10) incoming CIFS connections, …

  56. anonymouscommenter says:

    The reason to still want 32-bit versions of server operating systems is that you can buy very cheap hardware to run non-CPU and/or non-memory bound services, such as file servers (SMB, HTTP, whatever), notwithstanding that the caches will be smaller or invalidated sooner, streaming services, middleware components and services, load balancing and routing, etc.

    Although the used memory will usually be a function of the number of active connections, it'll be a shallow linear relation.  If you do things right, Of Course™.

  57. anonymouscommenter says:

    For many places, terminal server is there to allow users to remotely access applications that they use on their workstations. That's why having Win7 32-bit and no server equivalent is a problem. You say "But running client components is not why you bought a server product", but that's pretty much one of the exact use cases terminal server is there to serve. So you have apps that are not 64-bit compatible; users can run them on Windows 7, but not on Server 2008 R2.

Comments are closed.