Revisiting 64-bit-ness in Visual Studio and elsewhere


[Due to popular interest I also wrote a piece that is “pro” 64 bits here]

The topic of 64-bit Visual Studio came up again in a tweet and, as usual, I held my ground on why it is the way it is.  Pretty predictable.  But it’s not really possible to answer questions about your position in a tweet, hence this posting.

I’m going to make some generalizations to make a point and you should really not use those generalizations to make specific conclusions about specific situations.  This is as usual in the spirit of giving approximately correct advice rather than writing a novel.
Let’s say I convert some program to a 64-bit instruction set from a 32-bit instruction set.  Even without knowing anything about the program I can say with pretty good confidence that the most probable thing that will happen is that it will get bigger and slower.
“But Rico! More RAM better!  More bits better!”
In the immortal words of Sherman T. Potter: “Horse hucky!”
I’ve said this many times before: for the most part there is no space/speed trade-off.  Smaller IS Faster.  In fact, in a very real sense Space is King.  Or, if you like, Bigger is Slower.  Part of the reason we study space/speed tradeoffs is because they are exotic beasts, and it’s important to understand how it is that using more memory, which is inherently more expensive, can strangely give you a speedup, and under what conditions that speedup actually persists.
Let’s break it down to two cases:
1. Your code and data already fits into a 32-bit address space
Your pointers will get bigger; your alignment boundaries get bigger; your data is less dense; equivalent code is bigger.  You will fit less useful information into one cache line, code and data, and you will therefore take more cache misses.  Everything, but everything, will suffer.  Your processor’s cache did not get bigger.  Even other programs on your system that have nothing to do with the code you’re running will suffer.  And you didn’t need the extra memory anyway.  So you got nothing.  Yay for speed-brakes.
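To make that concrete, here is a tiny C++ sketch, entirely my own illustration and not anything taken from VS, showing how pointer width alone inflates a typical node and cuts how many of them fit in a 64-byte cache line:

#include <cstdio>

struct Node {
    Node*       next;   // 4 bytes on x86, 8 bytes on x64
    const char* name;   // 4 bytes on x86, 8 bytes on x64
    int         flags;  // 4 bytes on both
};
// x86: 12 bytes per node, so 5 nodes share a 64-byte cache line.
// x64: 24 bytes per node (8-byte alignment adds 4 bytes of padding),
//      so only 2 nodes share a cache line; the same walk takes more misses.

int main() {
    std::printf("sizeof(Node) = %zu bytes\n", sizeof(Node));
    return 0;
}

The code didn’t change at all; the data just got fatter, and the cache didn’t.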
2. Your code and data don’t fit into a 32-bit address space
So, you’re now out of address space.  There are two ways you could try to address this.
a) Think carefully about your data representation and encode it in a more compact fashion
b) Allow the program to just use more memory
I’m the performance guy so of course I’m going to recommend that first option. 
Why would I do this?
Because virtually invariably the reason that programs are running out of memory is that they have chosen a strategy that requires huge amounts of data to be resident in order for them to work properly.  Most of the time this is a fundamentally poor choice in the first place.  Remember: good locality gives you speed, and big data structures are slow.  They were slow even when they fit in memory, because less of them fits in cache.  They aren’t getting any faster by getting bigger; they’re getting slower.  Good data design includes affordances for the kinds of searches/updates that have to be done and makes it so that, in general, only a tiny fraction of the data actually needs to be resident to perform those operations.  This happens all the time in basically every scalable system you ever encounter.  Naturally I would want people to do this.
Note: This does NOT mean “store it in a file and read it all from there.”  It means “store *most* of it in a file and make it so that you don’t read the out-of-memory parts at all!”
This approach is better for customers; they can do more with less.  And it’s better for the overall scalability of whatever application is in question.  In 1989 the source browser database for Excel was about 24M.  The in-memory store for it was 12k.  The most I could justify on a 640k PC.  It was blazing fast because it had a great seek, read and cache story.
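A minimal sketch of that kind of store might look like the following; the fixed-size Record layout, the 1024-entry cache cap, and the class names are invented for illustration and are not the actual Excel browser format.  The shape of the idea is: keep the bulk on disk, seek to the record you need, and keep only the hot entries resident.

#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Record {
    uint32_t symbolId;
    uint32_t fileId;
    uint32_t line;
};

class RecordStore {
public:
    explicit RecordStore(const char* path) : f_(std::fopen(path, "rb")) {}
    ~RecordStore() { if (f_) std::fclose(f_); }

    // Fetch one record by index: a cache hit costs nothing, a miss costs
    // one seek and one small read -- never a full load of the file.
    bool lookup(uint32_t index, Record& out) {
        auto it = cache_.find(index);
        if (it != cache_.end()) { out = it->second; return true; }
        if (!f_) return false;
        if (std::fseek(f_, static_cast<long>(index * sizeof(Record)), SEEK_SET) != 0) return false;
        if (std::fread(&out, sizeof(Record), 1, f_) != 1) return false;
        if (cache_.size() < 1024)             // keep residency bounded
            cache_.emplace(index, out);
        return true;
    }

private:
    std::FILE* f_;
    std::unordered_map<uint32_t, Record> cache_;  // only hot records stay in memory
};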
The big trouble with (b) is that Wirth’s Law (“software manages to outgrow hardware in size and sluggishness”) applies basically universally, and if you don’t push hard nothing ever gets better.  Even data that has no business being as big as it is will not be economized.  Remember, making it so that less data needs to be accessed to get the job done helps everyone in all the cases, not just the big ones.
So what does this have to do with, say, Visual Studio? *
I wrote about converting VS to 64-bit in 2009 and I expect the reasons for not doing it then mostly still apply now.
Most of Visual Studio does not need and would not benefit from more than 4G of memory.  Any packages that really need that much memory could be built in their own 64-bit process and seamlessly integrated into VS without putting a tax on the rest.   This was possible in VS 2008, maybe sooner.  Dragging all of VS kicking and screaming into the 64-bit world just doesn’t make a lot of sense. **
Now if you have a package that needs >4G of data *and* you also have a data access model that requires a super chatty interface to that data at all times, such that, say, SendMessage isn’t going to do the job for you, then I think rethinking your storage model could provide huge benefits.
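As a hedged illustration of that difference (the interface and type names below are invented; a real VS package would be doing this over COM rather than plain C++ classes), compare a chatty per-item interface with a coarse one where a single request crosses the process boundary and the bulk data never does:

#include <string>
#include <vector>

// Chatty: one call per symbol.  Fine in-process, painful when every call
// is a cross-process round trip.
struct IChattySymbolStore {
    virtual ~IChattySymbolStore() = default;
    virtual std::string NameOf(int symbolId) = 0;
    virtual int KindOf(int symbolId) = 0;
};

// Coarse: ship the whole question over, get the whole answer back.
struct CompletionRequest { std::string prefix; int maxResults; };
struct CompletionItem    { std::string name; int kind; };

struct ICompletionService {
    virtual ~ICompletionService() = default;
    virtual std::vector<CompletionItem> Complete(const CompletionRequest& req) = 0;
};

With the coarse shape, the >4G store can sit in its own 64-bit process and the 32-bit shell only ever sees small answers.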
In the VS space there are huge offenders.  My favorites to complain about are the language services, which notoriously load huge amounts of data about my whole solution so as to provide Intellisense about a tiny fraction of it.  That doesn’t seem to have changed since 2010.  I used to admonish people in the VS org to think about solutions with say 10k projects (which exist) or 50k files (which exist) and consider how the system was supposed to work in the face of that.  Loading it all into RAM seems not very appropriate to me.  But if you really, no kidding around, have storage that can’t be economized and must be resident, then put it in a 64-bit package that’s out of process.
That’s your best bet anyway.  But really, the likelihood that anyone will have enough RAM for those huge solutions even on a huge system is pretty low.  The all-RAM plan doesn’t scale well…  And you can forget about cache locality.
There are other problems with going 64-bit.  The law of unintended consequences: there’s no upper limit on the amount of memory you can leak.  Any badly behaved extension can use crazy amounts of memory, to the point where your whole system is unusable. ***
But, in general, using less memory is always better advice than using more.  Creating data structures with great density and locality is always better than “my representation is an n-way tree with pointers to everything everywhere.”
My admonition for many years has been this: think about how you would store your data if it were in a relational database.  Then do slices of that in RAM.  Chances are you’ll end up in a much better place than the forest of pointers you would have used had you gone with the usual practice.  Fewer pointers, more values.
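Here is a minimal sketch of what that looks like; the table and field names are invented for illustration.  Rows of plain values indexed by small integer ids, with a “slice” being just the rows one operation needs:

#include <cstdint>
#include <string>
#include <vector>

// One "row" of a symbol table: values and small integer ids, no pointers.
// An id stays 4 bytes regardless of the process bitness.
struct SymbolRow {
    uint32_t nameId;   // index into names
    uint32_t fileId;   // index into a file table
    uint32_t line;
    uint8_t  kind;
};

struct SymbolTable {
    std::vector<std::string> names;    // nameId   -> name
    std::vector<SymbolRow>   symbols;  // symbolId -> row, stored densely

    // A "slice": the handful of rows one operation needs,
    // not the whole solution's worth of data.
    std::vector<uint32_t> symbolsInFile(uint32_t fileId) const {
        std::vector<uint32_t> result;
        for (uint32_t id = 0; id < symbols.size(); ++id)
            if (symbols[id].fileId == fileId)
                result.push_back(id);
        return result;
    }
};

In a real system the per-file lookup would be backed by an index rather than a scan, but the point stands: values and ids pack densely and page well; a forest of pointers does neither.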
This isn’t about not wanting a great experience for customers; nothing could be further from the truth.  It’s about advocating excellence in engineering rather than just rubberstamping growth.  This is basically my “brand.”
* I don’t work on Visual Studio anymore; don’t read this as any indication of future plans or lack of plans, because I literally have no idea
** There are significant security benefits to going 64-bit due to address space randomization, and you do get some code savings because you don’t need the WOW subsystem, but VS is so big compared to those libraries that the savings don’t really help much; it was a big factor for MS Edge though
*** Also happens in MS Edge

Comments (26)

  1. Billy O'Neal says:

    The profiling tools that were added semi-recently make the language services look like small cheese — even doing a profile on a relatively small app easily creates traces in the 30GiB range, and it isn't practical for those traces to be indexed at profile creation time because you want profiling overheads to be as small as possible.

    Maybe they should move their bits out of proc 🙂

  2. ricom says:

    I wrote a slick profiling tool that was all streaming.  It has a one time index building phase.  The whole point of it was to use as little memory as possible in analysis and leave as much as possible for the disk cache.  If the file is dense you do pretty good.  But you need to be able to seek to answer some questions.  Lots of things scale beyond available memory…  

  3. Simon says:

    You didn't mention 3rd parties. VS is not alone in its own process. There are bazillions of 3rd party DLLs (some so called "3rd party" being in fact just other Microsoft or assimilated teams I suppose) loaded in that process. Take Resharper for example. It kills the whole thing. At the end of the day, you *must* close and reopen your VS for the next day. Is there any "out-of-process" model coming out for VS if it's gonna stay 32b?

  4. ricom says:

    VS has had the ability to do out-of-process extensions since 2008.  We did this for VSO for instance.  There's lots of cases where this makes a ton of sense.  I can't imagine that's gone.  VS extensions are basically all on equal footing in this regard.  I mean, at the end of the day it's some COM — how would we even know you had remoted it?  How would we know that the in-process part of your extension is talking to some out-of-process thing?  You don't even have to do it via COM proxies. You could use two cans and a string to talk to your main extension process.

  5. ricom says:

    Remember I do not work on VS anymore so I have no idea what their plans are… I'm just telling you how it was all those years ago when I was there…

  6. Fabio says:

    For a lot of extensions that don't use the UI intensively it is possible to move out of process, but for an extension like Resharper that uses the same graphics controls (the text editor) it is not possible (IMHO) to do that with high integration and a great user experience.

  7. ricom says:

    Some are definitely harder.  The trick is going to be keeping enough data locally and yet keeping the bulk out of process.  But I think it could be done.

  8. @Simon / Fabio NCrunch is a fantastic example of this. A few versions back it moved the heavy lifting out of process to improve performance in VS, improve reliability as a whole, and also enable you to run tests in either 32 or 64 bit.

  9. Michael says:

    I am not saying that I disagree with you, but your argument looks a little less complete when you don't mention the additional registers available in x64 over x86.

  10. ricom says:

    The registers don't add up to a hill of beans for most workloads.  This is partly due to the great L1 behaviour of [esp+xx].

    There are exceptions.

  11. Dave Shaw says:

    Rico, the "converting VS to 64-bit in 2009" link appears to be broken; I'm getting a Group Not Found error.

  12. ricom says:

    Bah, the editor claimed it was right but it didn't work.  So I edited it back to what it was and it works now.  ./shrug

    In case it breaks again here it is: blogs.msdn.com/…/visual-studio-why-is-there-no-64-bit-version.aspx

  13. chmeee says:

    X86 gets one performance gain by going to 64-bit (though, the same gain is met with Linux's 'x32' runtime): More registers.  32-bit x86 is/was very anemic, at 8 registers, not all general purpose.  With the introduction of x86-64 this set of 8 became a set of 16, more or less general purpose.  With this, more registers means less need to spill to memory.  I don't know if Windows has a 'x32' ABI (64-bit CPU mode with 32-bit pointers and integers, en.wikipedia.org/…/X32_ABI ), but it could be a valid compromise vs going all 64-bit mode.

    Most other architectures gain nothing in 64-bit mode, except ARM, as the 32-bit mode is more or less a subset of 64-bit mode (I'm coming from a PowerPC background here, but it's similar for SPARC and MIPS), so a 'x32' equivalent ABI is effectively the 32-bit ABI already, and 64-bit mode has no gains in register set, only register size.  So, for programs which stick with 32-bit constraints (arithmetic, pointers, etc), going to 64-bit mode is very much a net loss.

  14. Alois Kraus says:

    VS has become better at large solutions but it is still not in the good region. With Roslyn things have worsened a bit in terms of VS responsiveness due to code analysis and GCs (I guess). That really should be fixed. Why don't you get Joe Duffy on board after Midori was cancelled? He really knows what it takes to program efficiently in .NET while improving the compiler, code gen and all other important aspects.

  15. ricom says:

    Suffice to say that Joe and I talk regularly 🙂

  16. mnmr says:

    While I applaud the principle of excellence in engineering, the realities of life are that you are stuck with a behemoth IDE, with the vast majority of the code probably never having been subjected to any kind of performance testing, let alone optimization. At least that is how it feels in daily usage. Throw in a handful of essential plugins and the x86 memory limit really is problematic.

    A 64-bit VS could jettison lots of baggage kept around for compatibility reasons, and at least let people with enough RAM relative to their solutions go about their business, without first having to convince the internet to embrace engineering excellence. That seems like a futile approach.

    VS should use plugins for its own core functionality, allow people to enable/disable bits they don't need, and it should collect and make available performance/memory metrics for plugins. Nothing helps people improve code as well as a wall of shame.

  17. macros ftw says:

    You threw out the macro editor; I suppose that was because you removed support for COM?

    Many arguments for not going 64-bit are like arguments for not going multithreaded. Cache sizes, responsiveness, etc. Ignoring multithreading today, in the year 2016, when there will be no relevant single-threaded performance increases ever, is of course ridiculous. When you have tons of parallel hardware at your disposal, it is actually much easier to solve problems in a performant way by using multiple cores; many tasks can't even be solved in a timely manner without using parallelism.

  18. Sebastian says:

    There's a difference between idealized arguments and arguments based on reality. Yes, it would be *best* if all apps had slick systems where data gets paged in and out as needed, using domain knowledge to make it more efficient than Windows could do for you. In reality, though, that's a lot of effort and a lot of people just never get around to it (e.g. the VS language services you mention).

    So do you want a program that scales "by default", even if it doesn't do it in the most efficient way possible (because it relies on the OS to do all the work), or do you want a program that *could* be written to scale but isn't, and thus doesn't scale? Because in many (most?) cases, those are your options.

    It sucks, but people just don't spend the effort to architect sophisticated systems for scaling. It would be better if they did, but using 64-bit processes could at least get you, say, 80% of the way there with no extra effort. And you can get closer without having to implement a full-blown memory paging system like you would in a 32-bit process: for example, do *just* demand loading for the initial load (i.e. lazy initialization) and then let Windows take over for subsequent unloads and loads of the data.

  19. ricom says:

    I've always taken the path of demanding the best… At least the best for the opportunity cost.  This is frequently not the easiest, but then easy isn't my brand.

  20. HarryDev says:

    I agree with ricom: never go to 64-bit unless absolutely necessary. However, VS has one major problem with being 32-bit only: its designer does not work with apps that require 64-bit due to a native dependency or similar. Yes, we need the designer to be able to load the assembly, since at design time we are connecting a ViewModel to the View to actually get some data in the views etc.

    This is a major problem. But I assume the designer could be out of process and really should….

  21. Sebastian says:

    The issue with demanding the best is that you're unlikely to reliably get it. So with 32 bits you have two kinds of programs: those that do heroics to scale by using clever algorithms and do so consistently through the lifetime of the program. And then there are the ones that just don't scale very well. As you point out VS is in the latter category, and so is almost every other program.

    On technical merit, the downsides of 64 bit (ptr bloat) are far outweighed by the severe downsides of 32 bit (heroics needed for scaling to large workloads). So that just leaves legacy as a justification for 32 bit – some code really is hard to port to 64 bit, but it's a one time cost and does buy you a ton (especially for future development).

  22. Paul says:

    1) Building 64bit Visual Studio Shell based applications without having to go "Out-of-process"

    2) Being able to use the Resource editor with MFC/COM/ATL projects which house 64bit COM controls in the dialogs ( I currently have to install 32bit versions of the same COM controls to be able to even see my dialog!)

    are just 2 compelling reasons why I for one would like to see a 64bit option (note option!) for the VS environment.

    Maybe one day.

  23. Daniel Laügt says:

    With .natvis files and the LegacyAddin keyword, we can write DLLs for visualizing our own C++ types. The problem: it works only if those DLLs are compiled in 32 bits!

  24. Remi says:

    I think we should port VS back to 16 bit – imagine how fast that would be!

  25. ricom says:

    16 bits is all anyone would ever need 🙂

  26. John Dubchak says:

    Sorry, not to nitpick, but Col. Potter always said, “horse pucky!”.
