Revisiting 64-bit-ness in Visual Studio and elsewhere

[Due to popular interest I also wrote a piece that is "pro" 64 bits here]

The topic of 64-bit Visual Studio came up again in a tweet and, as usual, I held my ground on why it is the way it is.  Pretty predictable.  But it’s not really possible to answer questions about your position in a tweet, hence this posting.

I’m going to make some generalizations to make a point and you should really not use those generalizations to make specific conclusions about specific situations.  This is as usual in the spirit of giving approximately correct advice rather than writing a novel.
Let’s say I convert some program from a 32-bit instruction set to a 64-bit instruction set.  Even without knowing anything about the program I can say with pretty good confidence that the most probable outcome is that it will get bigger and slower.
“But Rico! More RAM better!  More bits better!”
In the immortal words of Sherman T. Potter: “Horse hucky!”
I’ve said this many times before, for the most part there is no space/speed trade-off.  Smaller IS Faster.  In fact, in a very real sense Space is King.  Or if you like Bigger is Slower.  Part of the reason we study space/speed tradeoffs is because they are exotic beasts and it’s important to understand how it is that using more memory, inherently more expensive, can strangely give you a speedup, and under what conditions that speedup actually persists.
Let’s break it down to two cases:
1. Your code and data already fits into a 32-bit address space
Your pointers will get bigger; your alignment boundaries get bigger; your data is less dense; equivalent code is bigger.  You will fit less useful information into one cache line, code and data, and you will therefore take more cache misses.  Everything, but everything, will suffer.  Your processor's cache did not get bigger.  Even other programs on your system that have nothing to do with the code you’re running will suffer.  And you didn’t need the extra memory anyway.  So you got nothing.  Yay for speed-brakes.
2. Your code and data don’t fit into a 32-bit address space
So, you’re now out of address space.  There are two ways you could try to address this.
a) Think carefully about your data representation and encode it in a more compact fashion
b) Allow the program to just use more memory
I’m the performance guy so of course I’m going to recommend that first option. 
Why would I do this?
Because virtually invariably the reason that programs are running out of memory is that they have chosen a strategy that requires huge amounts of data to be resident in order for them to work properly.  Most of the time this is a fundamentally poor choice in the first place.  Remember good locality gives you speed and big data structures are slow.  They were slow even when they fit in memory, because less of them fits in cache.  They aren’t getting any faster by getting bigger, they’re getting slower.  Good data design includes affordances for the kinds of searches/updates that have to be done and makes it so that in general only a tiny fraction of the data actually needs to be resident to perform those operations.  This happens all the time in basically every scalable system you ever encounter.   Naturally I would want people to do this.
Note: This does NOT mean “store it in a file and read it all from there.”  It means “store *most* of it in a file and make it so that you don’t read the out-of-memory parts at all!”
This approach is better for customers; they can do more with less.  And it’s better for the overall scalability of whatever application is in question.  In 1989 the source browser database for Excel was about 24M.  The in-memory store for it was 12k.  The most I could justify on a 640k PC.  It was blazing fast because it had a great seek, read and cache story.
The big trouble with (b) is that Wirth’s Law, “software manages to outgrow hardware in size and sluggishness,” applies basically universally, and if you don’t push hard nothing ever gets better.  Even data that has no business being as big as it is will not be economized.  Remember, making it so that less data needs to be accessed to get the job done helps everyone in all the cases, not just the big ones.
So what does this have to do with say Visual Studio? *
I wrote about converting VS to 64-bit in 2009 and I expect the reasons for not doing it then mostly still apply now.
Most of Visual Studio does not need and would not benefit from more than 4G of memory.  Any packages that really need that much memory could be built in their own 64-bit process and seamlessly integrated into VS without putting a tax on the rest.   This was possible in VS 2008, maybe sooner.  Dragging all of VS kicking and screaming into the 64-bit world just doesn’t make a lot of sense. **
Now if you have a package that needs >4G of data *and* you also have a data access model that requires a super chatty interface to that data going on at all times, such that say SendMessage for instance isn’t going to do the job for you, then I think maybe rethinking your storage model could provide huge benefits.
In the VS space there are huge offenders.  My favorites to complain about are the language services, which notoriously load huge amounts of data about my whole solution so as to provide Intellisense about a tiny fraction of it.   That doesn’t seem to have changed since 2010.   I used to admonish people in the VS org to think about solutions with say 10k projects (which exist) or 50k files (which exist) and consider how the system was supposed to work in the face of that.  Loading it all into RAM seems not very appropriate to me.  But if you really, no kidding around, have storage that can’t be economized and must be resident, then put it in a 64-bit package that’s out of process. 
That’s your best bet anyway.  But really, the likelihood that anyone will have enough RAM for those huge solutions even on a huge system is pretty low.  The all-RAM plan doesn’t scale well…  And you can forget about cache locality.
There are other problems with going 64-bit.  Call it the law of unintended consequences: there’s no upper limit on the amount of memory you can leak.  Any badly behaved extension can use crazy amounts of memory, to the point where your whole system is unusable. ***
But, in general, using less memory is always better advice than using more.  Creating data structures with great density and locality is always better than “my representation is an n-way tree with pointers to everything everywhere.”
My admonition for many years has been this:  Think about how you would store your data if it were in a relational database.  Then do slices of that in RAM.   Chances are you’ll end up in a much better place than the forest of pointers you would have used had you gone with the usual practice.  Fewer pointers, more values.
This isn’t about not wanting a great experience for customers, nothing could be further from the truth.  It’s about advocating excellence in engineering rather than just rubberstamping growth.  This is basically my “brand.”
* I don't work on Visual Studio anymore, don't read this as any indication of future plans or lack of plans because I literally have no idea
** There are significant security benefits going to 64-bit due to address randomization, and you do get some code savings because you don’t need the WOW subsystem, but VS is so big compared to those libraries that it doesn’t really help much.  It was a big factor for MS Edge though.
*** Also happens in MS Edge

Comments (42)
  1. Billy O'Neal says:

    The profiling tools that were added semi-recently make the language services look like small cheese — even doing a profile on a relatively small app easily creates traces in the 30GiB range, and it isn't practical for those traces to be indexed at profile creation time because you want profiling overheads to be as small as possible.

    Maybe they should move their bits out of proc 🙂

  2. ricom says:

    I wrote a slick profiling tool that was all streaming.  It has a one time index building phase.  The whole point of it was to use as little memory as possible in analysis and leave as much as possible for the disk cache.  If the file is dense you do pretty good.  But you need to be able to seek to answer some questions.  Lots of things scale beyond available memory…  

  3. Simon says:

    You didn't mention 3rd parties. VS is not alone in its own process. There are bazillions of 3rd party DLLs (some so called "3rd party" being in fact just other Microsoft or assimilated teams I suppose) loaded in that process. Take Resharper for example. It kills the whole thing. At the end of the day, you *must* close and reopen your VS for the next day. Is there any "out-of-process" model coming out for VS if it's gonna stay 32b?

  4. ricom says:

    VS has had the ability to do out-of-process extensions since 2008.  We did this for VSO for instance.  There's lots of cases where this makes a ton of sense.  I can't imagine that's gone.  VS extensions are basically all on equal footing in this regard.  I mean, at the end of the day it's some COM — how would we even know you had remoted it?  How would we know that the in-process part of your extension is talking to some out-of-process thing?  You don't even have to do it via COM proxies. You could use two cans and a string to talk to your main extension process.

  5. ricom says:

    Remember I do not work on VS anymore so I have no idea what their plans are… I'm just telling you how it was all those years ago when I was there…

  6. Fabio says:

    For a lot of extensions that don't use the UI intensively it is possible to move out-of-process, but for extensions like Resharper that use the same graphics control (the text editor) it is not possible (IMHO) to keep the high integration and great user experience.

  7. ricom says:

    Some are definitely harder.  The trick is going to be keeping enough data locally and yet keeping the bulk out of process.  But I think it could be done.

  8. @Simon / Fabio NCrunch is a fantastic example of this. A few versions back it moved the heavy lifting out of process to improve performance in VS, reliability as a whole, and also enable you to run tests in either 32 or 64 bit

  9. Michael says:

    I am not saying that I disagree with you, but your argument looks a little less complete when you don't mention the additional registers available in x64 over x86.

  10. ricom says:

    The registers don't add up to a hill of beans for most workloads.  This is partly due to the great L1 behaviour of [esp+xx]

    There are exceptions.

    1. NightStrike says:

      Why would you say that the extra x86_64 registers “don’t add up to a hill of beans”? On the mingw-w64 project, we regularly see a minimum of a 15% performance increase by having the reduced register pressure. Historically, the VS optimizer has been better in practice than the GCC one used in mingw-w64 (though that gap has narrowed significantly in recent years), so I’d be very surprised that you feel this way.

  11. Dave Shaw says:

    Rico, The "converting VS to 64-bit in 2009" link appears to be broken, I'm getting a Group Not  Found error.

  12. ricom says:

    Bah, the editor claimed it was right but it didn't work.  So I edited it back to what it was and it works now.  ./shrug

    In case it breaks again here it is:…/visual-studio-why-is-there-no-64-bit-version.aspx

  13. chmeee says:

    X86 gets one performance gain by going to 64-bit (though, the same gain is met with Linux's 'x32' runtime): More registers.  32-bit x86 is/was very anemic, at 8 registers, not all general purpose.  With the introduction of x86-64 this set of 8 became a set of 16, more or less general purpose.  With this, more registers means less need to spill to memory.  I don't know if Windows has a 'x32' ABI (64-bit CPU mode with 32-bit pointers and integers,…/X32_ABI ), but it could be a valid compromise vs going all 64-bit mode.

    Most other architectures gain nothing in 64-bit mode, except ARM, as the 32-bit mode is more or less a subset of 64-bit mode (I'm coming from a PowerPC background here, but it's similar for SPARC and MIPS), so a 'x32' equivalent ABI is effectively the 32-bit ABI already, and 64-bit mode has no gains in register set, only register size.  So, for programs which stick with 32-bit constraints (arithmetic, pointers, etc), going to 64-bit mode is very much a net loss.

  14. Alois Kraus says:

    VS has become better at large solutions but it is still not in the good region. With Roslyn things have worsened a bit in terms of VS responsiveness due to code analysis and GCs (I guess). That really should be fixed. Why don't you get Joe Duffy on board after Midori was cancelled? He really knows what it takes to program efficiently in .NET while improving the compiler, code gen and all other important aspects.

  15. ricom says:

    Suffice to say that Joe and I talk regularly 🙂

  16. mnmr says:

    While I applaud the principle of excellence in engineering, the realities of life are that you are stuck with a behemoth IDE, with the vast majority of the code probably never having been subjected to any kind of performance testing, let alone optimization. At least that is how it feels in daily usage. Throw in a handful of essential plugins and the x86 memory limit really is problematic.

    A 64-bit VS could jettison lots of baggage kept around for compatibility reasons, and at least let people with enough RAM respective to their solutions go about their business, without first having to convince the internet to embrace engineering excellence. It seems like a futile approach.

    VS should use plugins for its own core functionality, allow people to enable/disable bits they don't need, and it should collect and make available performance/memory metrics for plugins. Nothing helps people improve code as well as a wall of shame.

  17. macros ftw says:

    You threw out the macro editor; I suppose that was because you removed support for COM?

    Many arguments for not going 64-bit are like arguments for not going multithreaded. Cache sizes, responsiveness, etc. Ignoring multithreading today, in the year 2016, when there will be no relevant single-threaded performance increases ever, is of course ridiculous. When you have tons of parallel hardware at your disposal, it is actually much easier to solve problems in a performant way by using multi core; many tasks can't even be solved in a timely manner without using parallelism.

  18. Sebastian says:

    There's a difference between idealized arguments and arguments based on reality. Yes, it would be *best* if all apps just write slick systems where data gets paged in and out as a needed, using domain knowledge to make it more efficient than windows could do for you. In reality, though, that's a lot of effort and a lot of people just never get around to it (e.g. the VS language services you mention).

    So do you want a program that scales "by default", even if it doesn't do it in the most efficient way possible (because it relies on the OS to do all the work), or do  you want a program that *could* be written to scale but isn't, and thus doesn't scale? Because in many (most?) cases, those are your options.

    It sucks, but people just don't spend the effort to architect sophisticated systems for scaling. It would be better if they did, but using 64-bit processes could at least get you, say, 80% of the way there with no extra effort. And you can get closer without having to implement a full-blown memory paging system like you would in a 32-bit process: for example, do *just* demand loading for the initial load (i.e. lazy initialization) and then let Windows take over for subsequent unloads and loads of the data.

  19. ricom says:

    I've always taken the path of demanding the best… At least the best for the opportunity cost.  This is frequently not the easiest, but then easy isn't my brand.

  20. HarryDev says:

    I agree with ricom: never go to 64-bit unless absolutely necessary. However, VS has one major problem with being 32-bit only: its designer does not work with apps that require 64-bit due to a native dependency or similar. Yes, we need the designer to be able to load the assembly since we are at design time connecting a ViewModel to the View to actually get some data in the views etc.

    This is a major problem. But I assume the designer could be out of process and really should….

  21. Sebastian says:

    The issue with demanding the best is that you're unlikely to reliably get it. So with 32 bits you have two kinds of programs: those that do heroics to scale by using clever algorithms and do so consistently all the time through the lifetime of the program. And then there's the ones that just don't scale very well. As you point out VS is in the latter category, and so is almost every other program.

    On technical merit, the downsides of 64 bit (ptr bloat) are far outweighed by the severe downsides of 32 bit (heroics needed for scaling to large workloads). So that just leaves legacy as a justification for 32 bit – some code really is hard to port to 64 bit, but it's a one time cost and does buy you a ton (especially for future development).

  22. Paul says:

    1) Building 64bit Visual Studio Shell based applications without having to go "Out-of-process"

    2) Being able to use the Resource editor with MFC/COM/ATL projects which house 64bit COM controls in the dialogs ( I currently have to install 32bit versions of the same COM controls to be able to even see my dialog!)

    are just 2 compelling reasons why I for one would like to see a 64bit option (note option!) for the VS environment.

    Maybe one day.

  23. Daniel Laügt says:

    With .natvis files and the LegacyAddin keyword, we can write DLLs for visualizing our own C++ types. The problem: it works only if those DLLs are compiled as 32-bit!

  24. Remi says:

    I think we should port VS back to 16 bit – imagine how fast that would be!

    1. Kamyar says:

      I agree with Remi, let’s port VS to 16 bit. 1 megabyte of RAM is more than enough for any application!!

  25. ricom says:

    16 bits is all anyone would ever need 🙂

    1. PeterEnnis says:

      There are 10 types of programmers. Those that understand binary and those who don’t,
      so my 2 bits are enough if you don’t care about being fuzzy?

  26. James Johnston says:

    640k is enough for everyone! Not…..

    My computer has 32 GB RAM. It didn’t cost much. I could have equipped it with a lot more. I also have an SSD. It is very fast. It did not cost much, either. I once ran chkdsk /R on a different drive (not the SSD) and Windows 7 chkdsk used ALL my physical RAM (apparently by design). The swapping didn’t really bother me that much, and I actually didn’t notice the RAM usage until I tried to turn on a VM and got an error… So arguably, the SSD increases my available memory far beyond 32 GB.

    I agree with the sentiment that performance is important, and it’s good to try to improve it. So why don’t you use 32-bit versions of VS in your testing lab to make sure that works reasonably well, but also offer a 64-bit version of VS, especially for customers who have complex projects and/or 3rd-party plug-ins? 2 GB of address space looks increasingly limiting in this environment.

    Asking me to restrict VS RAM usage to only 6.25% of available RAM is hogwash. Because of some performance ideal that you’ll probably never reach, because VS is a complex ecosystem?

    It’s 2016. VS should be available in 64-bit, duh. I can’t believe this discussion is even happening.

  27. Corry says:

    Seems someone is forgetting about the extra registers offered in 64 bit. Yes, program size will grow, but address space grows exponentially. Not a problem. Alignment is there to increase load speeds by ensuring boundaries (specifically at the cache-line size) aren’t split, ensuring a single load. In short, it’s better in every way except size. VS is not an embedded application, so why the embedded mentality?

  28. Brian Catlin says:

    You couldn’t be more wrong. You’re directly contradicting the experience of the Windows team when they went from 32-bit to 64-bit for the build labs. The build time for Windows was something like 1/2 of what it was on the 32-bit systems. The most important benefit of moving to 64-bits is that there are twice as many general purpose registers available in 64-bit mode, drastically reducing the number of MOVs. No matter how fast your CPU is, all CPUs wait at the same speed, and x86 systems spend a great deal of time waiting for memory (which is why Intel was forced to put such large caches on their processors).

  29. paul says:

    Today 32GB of DDR4 RAM goes for $103.99 (I just checked at Amazon, 2x16GB Crucial). Which means every developer can have 64GB of RAM without problems. Having 128GB is only a problem because few motherboards support that, but that’s probably going to change in two to three years.

    In this context there’s nothing inappropriate in loading solutions with 10k projects into ram – in fact that’s probably the best option, speed and complication wise.

    Single-threaded speeds have mostly stalled and are in fact relatively close to physical limits. Relative to the limits for memory, a 4GHz CPU with 10TB of on-chip memory is going to happen (there’s an enormous amount of space once you go 3D), but a 10GHz one (with memory or not) probably not, at least not with silicon. So the future is the exact opposite of what you propose: bigger memory use for the sake of simpler architecture and faster processing.

  30. AlanWill says:

    Why Windows10 having x64?

  31. Drew Golden says:

    Reasons I stopped developing on Micro$oft: 1.) Non-standards adherence – as in they make up their own. 2.) Technology left turns down abandoned alleys. (Dot-Net anyone?) 3.) Lack of keeping up with open source competition. Everyone here that chooses to develop MS will end up abandoned on an island of technology despair. Been there before, I will not ever go back.

  32. Martin Dobsik says:

    Well, if VS were 64-bit I could use ReSharper (a 3rd-party plugin) with fewer problems. Now I have to restart it every now and then because it tells me it ran out of memory. Really? I have 64GB in the desktop PC.

    Would you stop arguing and make it 64-bit please? I can’t start reasoning with every 3rd-party vendor (and Microsoft) to optimize their code.

  33. Rune says:

    Meanwhile… As a Reflector user and somebody suffering with a solution file spanning 97 projects…


    There is very little headroom running the VS IDE these days. It does not look as if it has been compiled as large-address-aware, so it is limited to a 2GB address space. At least that is where it maxes out on my rig. Quite annoying, seeing as I never use hardware with less than 16GB installed.

  34. Tom Kerrigan says:

    64-bit isn’t a pure win (bigger pointers) but it’s weird to read anybody complaining about 64-bit in this day and age.

    64-bit came to x86 in 2003, i.e., 14 years ago. It was mainstream in RISC chips long before that. MIPS went 64-bit in 1991 = 26 years ago. Literally almost 3 decades ago. Alpha was 64-bit from the outset in 1992. Sun/SPARC in 1995. POWER in 1998.

    The Nintendo 64 was a 64-bit machine that you could buy in 1996.

    It’s hard to buy a PHONE today that isn’t 64-bit.

    The right time to complain about 64-bit would have been 14 to 26 years ago, not December 2015.

  35. Gabe says:

    I’m a big fan of smaller is better so I understand the goal, but I do not understand the decision. I’m also a big fan of standardization; the world is moving, and perhaps already HAS moved, to 64bit.

    Conversion to 64bit can be done while still maintaining low memory usage goals. Why the arbitrary 4G goal? Why 32bit and not 16bit as another poster noted? Why did I spend days last year running into VS out of memory errors when running code analysis?

    While I applaud the desire to continue streamlining the beast that is VS – I also do look forward to the day when I see only one Program Files folder and one System folder on my drive.

  36. Bob Noordam says:

    Visual Studio 2013 was a very reliable workhorse. VS 2015 introduced massive out-of-memory problems. VS 2017 is better, but it is still very easy to run into out-of-memory issues when working on large WebForms or WinForms solutions. Even closing the solution leaves a process with 1000-1800MB of memory allocated. The observation that everything since VS 2013 leaks memory somehow, somewhere is easily made. If moving to 64-bit hides these problems under the carpet, I’d gladly take that and leak 8GB a day if I must, and restart at a moment of my convenience. Memory and CPU speed are dirt cheap, while crashing out and restarting is _highly_ tedious.

  37. tpolm says:

    “The one who wants – seeks ways, who does not want – seeks a reason”

  38. Daniel Rowe says:

    I work on a solution with 100s of projects and lots of source, and I’m sure the development team does not test on these types of projects.

    VS2015 regularly locks up and uses close to 2 gigs of memory. Similar memory with VS2017. Early versions of VS 2015 were terrible, out-of-memory crashes etc.

    Really need to cut that usage. It’s a text editor, admittedly with features above text editors, but all the same.

    Large projects are a use case for an IDE so they should work well.

  39. John Dubchak says:

    Sorry, not to nitpick, but Col. Potter always said, “horse pucky!”.

Comments are closed.
