64-bit Visual Studio — the "pro 64" argument


 

[I don’t have to go out on a limb to acknowledge that this is the worst article I’ve ever written.  I wrote it in the wee hours one morning in a rotten mood and it shows.  There are far too many absolutes that should have been qualified and the writing style is too aggressive for no good reason.  I’m not taking it down because there are worthy comments, and I refuse to try to pretend it never happened.  But I absolutely regret writing this article in this way.  If you choose to read this, use a large sanity filter and look at some of the comments and the follow-up for qualifications to help see what I’m getting at.]

[This article is in response to an earlier posting which you can find here]

I’ve been reading some of the commentary on my post about 64-bit Visual Studio (which is really about 64-bit vs. 32-bit generally, using Visual Studio as an example), and I have to say that, for the most part, I’m pretty disappointed with the arguments being put forth in favor of 64-bits.

[Some less than charitable and totally unnecessary text removed.  I blame myself for writing this at 2:30am.  It was supposed to be humorous but it wasn’t.]

There is an argument to be made here, but there is also a great deal of ignoring of the real issue going on.

Let’s actually go about doing the job of properly attacking my position the way I think it should be attacked, shall we?

I start with some incontrovertible facts.  Don’t waste your time trying to refute them; you can’t refute facts.  You can have your own opinion, but you can’t have your own facts.

The relevant facts are these:

-the same algorithm coded in 64-bits is bigger than it would be coded in 32-bits

-the same data coded for 64-bits is bigger than it would be coded in 32-bits

-when you run the same code, but bigger encoding, over the same data, but bigger encoding, on the same processor, things go slower

-any work I can possibly do has an opportunity cost which will mean there is some other work I can’t do

All righty, it’s hard to argue with those.
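
To put some rough numbers behind those size facts, here is a minimal sketch (my own illustration, nothing from the VS code base): the identical source, recompiled for x64, roughly doubles the size of anything pointer-heavy, because int stays 32 bits under MSVC but every pointer grows from 4 to 8 bytes and drags alignment padding along with it.

    #include <iostream>

    // A typical pointer-heavy record, like you'd find in a symbol table or AST.
    struct Node
    {
        Node* next;   // 4 bytes on x86, 8 bytes on x64
        Node* child;  // 4 bytes on x86, 8 bytes on x64
        int   value;  // 4 bytes on both
    };

    int main()
    {
        // Typical MSVC results: sizeof(Node) is 12 when compiled for x86
        // and 24 when compiled for x64 (two 8-byte pointers plus 4 bytes
        // of data padded out to an 8-byte boundary).
        std::cout << "sizeof(void*) = " << sizeof(void*) << "\n";
        std::cout << "sizeof(Node)  = " << sizeof(Node) << "\n";
        return 0;
    }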

Now let’s talk about the basis I use for evaluation.

-I get points for creating a great customer experience

-I get no points for using technology X, only for the experience; using fewer technologies for the same experience is better than using more

-I get no points for using more memory, not even for enabling the use of more memory, only for the experience; using less memory for the same experience is better than using more

OK, so in short, I begin with “64-bits gets no free inherent value, it has to justify itself with Actual Benefits like everything else.”

We cannot make a compelling argument with fallacies like “32 bits was better than 16 therefore 64 must be better than 32”, nor will we get anywhere with “you’re obviously a short-sighted moron.”

But maybe there is something to learn from the past, and what’s happened over the last 6 years since I first started writing about this.

For Visual Studio in particular, it has been the case since ~2008 that you could create VS extensions that were 64-bits and integrate them into VS such that your extension could use as much memory as it wanted to (Multi-process, hybrid-process VS has been a thing for a long time).  You would think that would silence any objections right there — anyone who benefits from 64-bits can be 64-bits and anyone who doesn’t need 64-bits can stay 32-bits.  It’s perfect, right?
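
For completeness, here is the shape of that hybrid approach in miniature (a hypothetical sketch, not actual VS extension code, and "MyWorker64.exe" is a made-up name): a 32-bit host launches a separate 64-bit helper process and reads its results back over a pipe, so the helper can use as much address space as it wants while the host stays 32-bit.

    #include <stdio.h>
    #include <string>

    // Run a (hypothetical) 64-bit helper and capture everything it writes
    // to stdout. The helper gets its own 64-bit address space; the caller
    // remains a 32-bit process.
    std::string runWorker(const std::string& commandLine)
    {
        std::string output;
        FILE* pipe = _popen(commandLine.c_str(), "r");
        if (!pipe)
            return output;

        char buffer[4096];
        while (fgets(buffer, sizeof(buffer), pipe))
            output += buffer;

        _pclose(pipe);
        return output;
    }

    // Usage (hypothetical): runWorker("MyWorker64.exe --analyze huge.sln");

Real VS extensions would of course use richer channels than stdout, but the division of labor is the same: the memory-hungry part lives out of process.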

Well, actually things are subtler than that.

I could try to make the case that the fact that there are so few 64-bit extensions to VS is proof positive that they just aren’t needed.  After all, it’s been nearly 8 years, there should be an abundance of them.  There isn’t an abundance, so, obviously, they’re not that good, because capitalism.

Well, actually, I think that argument has it exactly backwards, and leads to the undoing of the points I made in the first place.

The argument is that perhaps it’s just too darn hard to write the hybrid extensions.  And likewise, perhaps it’s too darn hard to write “good” extensions in 32-bits that use memory smartly and page mostly from the disk.  Or maybe not even hard, but let’s say inefficient, from either an opportunity cost perspective or from a processor efficiency perspective; and here an analogy to the 16-bit to 32-bit transition might prove useful.

It was certainly the case that with a big disk and swappable memory sections any program you could write in 32-bit addressing could have been created in 16-bit (especially that crazy x86 segment stuff).  But would you get good code if you did so?  And would you experience extraordinary engineering costs doing so?  Were you basically fighting your hardware most of the time trying to get it to do meaningful stuff?  It was certainly the case that people came up with really cool ways to solve some problems very economically because they had memory pressure and economic motivation to do so.  Those were great inventions.  But at some point it got kind of crazy.  The kind of 16-bit code you had to write to get the job done was just plain ugly.

And here’s where my assumptions break down.  In those cases, it’s *not* the same code.  The 16-bit code was slow ugly crapola working around memory limits in horrible ways and the 32-bit code was nice and clean and directly did what it needed to do with a superior algorithm.  Because of this, the observation that the same code runs slower when it’s encoded bigger was irrelevant.  It wasn’t the same code!  And we all know that a superior algorithm that uses more memory can (and often does) outperform an inferior algorithm that’s more economical in terms of memory or code size.
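
A toy example of that last point (mine, not anything from VS): checking a list for duplicates with a hash set spends O(n) extra memory but runs in roughly linear time, while the zero-extra-memory nested-loop version is quadratic and loses badly as soon as the input gets large.  Same bit width, same processor; the memory-hungry algorithm simply wins.

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // Memory-frugal: no extra storage, but O(n^2) comparisons.
    bool hasDuplicateFrugal(const std::vector<int>& v)
    {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j])
                    return true;
        return false;
    }

    // Memory-hungry: an extra hash set roughly the size of the input,
    // but O(n) expected time: the superior algorithm that uses more memory.
    bool hasDuplicateHungry(const std::vector<int>& v)
    {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second)   // insert failing means we've seen x
                return true;
        return false;
    }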

Do we have a dearth of 64-bit extensions because it’s too hard to write them in the hybrid model?

Would we actually gain performance because we wouldn’t have to waste time writing tricky algorithms to squeeze every byte into our 4G address space?

I don’t have the answer to those questions.  In 2009 my thinking was that for the foreseeable future, the opportunity cost of going to 64-bits was too high compared to the inherent benefits.   Now it’s 2016, not quite 7 years since I first came to that conclusion.  Is that still the case?
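
For flavor, this is the kind of byte-squeezing trick that second question is about (an illustrative sketch, not code from any real extension): instead of chaining 64-bit pointers, allocate nodes out of a pool and link them with 32-bit indices.  Every link costs 4 bytes no matter how the process is compiled, at the price of extra bookkeeping and a hard cap of about four billion nodes.

    #include <cstdint>
    #include <vector>

    // A singly linked list whose links are 32-bit indices into a pool
    // rather than full pointers.
    struct IndexNode
    {
        std::uint32_t next;    // index of the next node, or kNil
        int           value;
    };

    const std::uint32_t kNil = 0xFFFFFFFFu;

    class IndexList
    {
    public:
        std::uint32_t push_front(int value)
        {
            IndexNode node = { head_, value };
            pool_.push_back(node);
            head_ = static_cast<std::uint32_t>(pool_.size() - 1);
            return head_;
        }

        int sum() const
        {
            int total = 0;
            for (std::uint32_t i = head_; i != kNil; i = pool_[i].next)
                total += pool_[i].value;
            return total;
        }

    private:
        std::vector<IndexNode> pool_;   // all nodes live here, contiguously
        std::uint32_t head_ = kNil;
    };

Whether writing and maintaining that kind of thing is a good use of engineering time is exactly the opportunity-cost question.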

Even in 2009 I wanted to start investing in creating a portable 64-bit shell* for VS because I figured the costs would tip at some point. 

I don’t work on Visual Studio now, so I don’t know what they’re thinking about all this.

If there’s a reason to make the change now, I think I’ve outlined it above. 

What I can say is that even in 2016, the choice doesn’t look obvious to me.   The case for economy is still strong.  And few extensions are doing unnatural things because of their instruction set – smart/economical use of memory is not unnatural.  It’s just smart.
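
And to be concrete about what smart/economical use of memory looks like, here is a minimal sketch (hypothetical, not any real VS component): keep only a small table of file offsets resident and read the actual records from disk on demand, instead of holding the whole data set in memory.  A browser that shows one record at a time can live happily in a few megabytes of index even when the file behind it is gigabytes.

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    // Keeps one offset per record in memory; the record text itself stays
    // on disk and is read only when someone actually asks for it.
    class LineIndex
    {
    public:
        explicit LineIndex(const std::string& path) : file_(path)
        {
            std::streamoff pos = file_.tellg();
            std::string line;
            while (std::getline(file_, line))
            {
                offsets_.push_back(pos);
                pos = file_.tellg();
            }
            file_.clear();   // clear EOF so later seeks work
        }

        std::size_t count() const { return offsets_.size(); }

        // Fetch record i from disk on demand.
        std::string fetch(std::size_t i)
        {
            file_.seekg(offsets_[i], std::ios::beg);
            std::string line;
            std::getline(file_, line);
            return line;
        }

    private:
        std::ifstream file_;
        std::vector<std::streamoff> offsets_;   // the small in-memory index
    };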

*the “Shell” is the name we give to the core of VS (what you get with no extensions, which is nearly nothing, plus those few extensions that are so indispensable that you can’t even call it VS if you don’t have them, like solutions support — that’s an extension)


Comments (38)

  1. HarryDev says:

The reason to change at least something is to let VS work appropriately with 64-bit software developed inside VS, such as the Designer (WPF).  This is a major problem for us, so we have to have a 32-bit version as well as a 64-bit one, even though we only use 64-bit in production for other reasons.

  2. Damien says:

    Would it still be the same code though? Doesn't x64 give you more registers? Possibly offering better optimizations?

  3. ricom says:

Damien that's really the point.  Sometimes it isn't the same code.  But as it turns out the extra registers don't help an interactive application like VS very much; it doesn't have a lot of tight compute-intensive loops, for instance.  And also the performance of loads off the stack is so good when hitting the L1 that they may as well be registers — except the encode length of the instruction is worse.  But then the encode length of the 64 bit instructions with the registers is also worse…

    So, ya, YMMV, but mostly those registers don't help big applications nearly so much as they help computation engines.

  4. KooKiz says:

Back in VS2010 days, I would have thought that 64-bits was really necessary, as my IDE was crashing every 2 hours or so. Every time you installed a new extension, you'd have to ponder whether the productivity gain provided by the extension outweighed the reduced time-to-crash. For instance, removing Resharper improved stability a great deal, but restarting VS every 5 hours instead of every 2 hours wasn't worth losing all the nice features.

Since VS2012, things have improved a great deal, and I've got to say that I rarely ever see my VS2015 crashing. So, whatever it was, I guess 32/64 bits wasn't the real issue after all.

  5. Moschops says:

    "because capitalism."

    I think you mean "because of the free market". Capitalism isn't the same thing, and it's quite possible to have either of the two without the other.

  6. ricom says:

I thought I meant capitalism… Meaning if it was good people would use it to make a buck.

  7. ricom says:

    You know I've been thinking about my 2nd article since I wrote it a few hours ago. And maybe I shouldn't be writing things at like 3am but anyway. I think I can net it out pretty much like this:

    If you find yourself running out of space you are going to be in one of two situations:

    1 If you stop doing some stupid thing you will fit fine into 32 bits of address space.

    OR

    2 If you start doing some stupid thing you will fit fine into 32 bits of address space.

    In 2009, the situation in VS was definitely #1.

    The question is, is that still the case in 2016? Because if it isn't then #2 really shouldn't be countenanced.

  8. ricom says:

    I have removed an early paragraph because rather than being funny it was actively detracting from the article and it was, to quote a redditor, "Not very charitable."

  9. TFries says:

I'd suspect that technologies that have made the development process more data-intensive, like Roslyn keeping longer-term copies of the AST around and IntelliTrace keeping a sizable debugging history, have made the case for a greater address space a lot stronger since you last visited the topic, but I don't know that I'm personally convinced the tipping point for needing a 64 bit VS has been reached *yet*.

    But if you ask me, moving toward a multi-process model makes a lot more sense for VS than simply just going 64 bit.  Maybe not if you look at it through a solely perf-based lens, but for reliability, extensibility, and testability reasons continuing to put the application's eggs all in one basket of a process makes less sense over time as complexity invariably increases.  It'd also indirectly address one of the needs for VS to even be 64 bit by relaxing the stress on a single address space; and if some theoretical future feature needed 64 bitness for whatever reason, it alone could make that decision without having to justify dragging the rest of the product along with it.

  10. ricom says:

That was certainly how I felt in 2009.  I'm not sure in 2016.  But then I'm not on the team so how would I even know anymore 🙂

    I do think hybrid has a lot of merits but it's not right for everything and the need to have more and more of the AST loaded seems to be only growing.  Though frankly I wish we could do this with less of that tree actually resident.

  11. DWalker07 says:

    One commenter on the other article mentioned Resharper.  Is the issue just that the Resharper library wasn't done in the best way?  

  12. Brian says:

    One thing to note about your 16->32 analogy is that back in the 16 bit days, all but the most trivial code (or, in some cases, really optimized code) usually kept 32-bit pointers.  You could write code using 16-bit pointers, but there's a reason that "lp" was probably the most used Hungarian prefix.

    Then again, I mostly went from 8-bit embedded Z-80s to 32-bit Windows without doing much in the way of 16-bit at all.  When I did write 16-bit code, I did not spend a lot of effort tweaking it for performance (trying to keep it readable was a larger goal).

  13. Rob Jellinghaus says:

    There's one big aspect of the discussion that's missing so far in these posts, I think.  You profess to be unaware of what Visual Studio is doing since you left, but I think you know about Roslyn, yes?  Rewriting the C# compiler entirely in C#, using a C# object model for the entire program representation?

    That's as pointer-rific and "wooden" (to cite your "wood vs marble" analogy from blogs.msdn.com/…/coding-in-marble.aspx ) as you can get.  BUT, it is also a very productive way to program — you have the full power of the idiomatic language, your debugger natively understands all object relationships, the GC is handling all your memory management, etc.

    You know and I know that our work in Midori to facilitate the "marble" pattern had a lot of complex code to facilitate interop between the "wooden" world of pointerful objects on the heap (natively supported by C#), and the "marble" world of tabular, dense object representations (requiring a bunch of custom C# libraries and carefully structured types).  The "marble" world was a lot of work to build and was fairly rocket-science-ful.

    So, to bring this back to the 64-bit analogy:  one big advantage of pointer-ful OOPy code is that it's very well supported by the language.  But if your object graph grows larger than 4GB, then currently 64-bit is your only hope.  Of course you could easily wind up dead anyway if you are traversing huge regions of that graph… but maybe you're not; maybe you're pointer chasing in a relatively structured way, and you're not thrashing your cache to death.

    I think if we really believe that denser object representations are a good thing, we need to do a lot more to make them easier for programmers to use.  In the limit, I would like to have a language that let me almost entirely abstract away from object representations — I can just write "Foo foo = new Foo();" regardless of whether Foo is made of wood or marble.  And I would like to be able to say "Bar bar = new Bar(foo);" in the same way, with seamless interop from the programmer's perspective.

    Until we do this, I think the "wood or marble" choice is almost always going to wind up being wood for most programmers, because that's what the language facilitates.

There are some interesting experiments in transforming object representations "under the hood" of the language — for example, infoscience.epfl.ch/…/ildl.pdf and http://www.hirschfeld.org/…/MattisHenningReinHirschfeldAppeltauer_2015_ColumnarObjectsImprovingThePerformanceOfAnalyticalApplications_AuthorsVersion.pdf — I don't believe either of these are production-ready by any means, but they point towards what I think is needed to really make marble popular 🙂

    In other words:  if marble were easier, people would want 64 bits a lot less!

  14. Rob Jellinghaus says:

    Addendum: looks like the "wood to marble" post wasn't making the point I remembered it making.  Your "LINQ to Memory" idea is more like what I meant:  blogs.msdn.com/…/quot-linq-to-memory-quot-another-old-idea.aspx — imagine that this was a real facility of the language, and that the language enabled you to put [Dense] attributes on classes that you wanted to store in a columnar representation….

  15. km007 says:

    @TFries

A couple of years ago I would have wanted VS to move to a multi-process model for reliability, but it seems to crash far less often. I don't know if the plugins have improved or they're just less able to do damage when something goes wrong.

  16. teo says:

As an actual developer of VS extensions (and addins, remember these?) and a guy who routinely deals with data which cannot fit into the RAM of the computer, let alone in the less than 1 GB heap of the 32-bit VS, I strongly object to the "facts" you mention.

1. the same data coded for 64-bits is bigger than it would be coded in 32-bits – except when the data is text. One of my tasks some years ago was to find a needle in a haystack, where the haystack was a 160 GB text file. Recompiling my program for 64-bit gave me almost a 10% speed boost. So, as with all performance-related tasks, giving blanket statements like "64-bit code and data are larger, therefore slower" is wrong. The right approach is "measure for your case and decide".

    2. The actual issue with VS is not how much memory it has, but the heap fragmentation. Considering everything loaded in-proc, VS has around 900 MB of heap and after that it goes into crash-happy mode. Granted, in recent versions things have changed and I've seen it with 1.1-1.2 GB working set.

3. Nowadays projects are a bit more complex than in '89. You said: "In 1989 the source browser database for Excel was about 24M.  The in-memory store for it was 12k." Well, in 2016, my solution may consist of a project written in TypeScript and node.js (the server backend), Java (Android client), Objective-C (iOS client), and WPF – the desktop client. It may also have C++ bits for speed-critical code. Every project in my hypothetical solution has its own IntelliSense, debugger, profiler, etc. This is before I start the SQL tools because, well, I need to design my data. So, the requirements have changed and whatever worked in '89 is no longer applicable.

4. Process boundaries are very expensive to cross in the Windows API. Every cross-process communication is very complex to design and implement and slow when executed. That's why even VS cheats. For example, the WPF designer is a completely stand-alone program, which just creates a Win32 window as a child of the Win32 window created for a WPF-based tool window inside VS and calls it a day. A neat hack, which is well-supported by the underlying Win32 API, completely unsupported by WPF, and gets the job done.

  17. teo says:

    Continuing my rant:

    5. "I could try to make the case that the fact that there are so few 64-bit extensions to VS is proof positive that they just aren’t needed." No. People has to deal with the bugginess and API changes of VS. For example, we – the VS extension developers, have to ship addins for older versions of VS (we actually have paying customers so the MS excuse "we do not support these versions" does not apply). We have to supply extensions. We want the code between these to be shared. We want our code not to crash randomly. The most peculiar bug of the VS extensibility API I've found so far is that when asking for the color of the text in the text editor through the documented APIs used to result in a hard crash of VS in like 5% of the calls. We had to code around instabilities like this for decades. And no sane person would build upon a flashy new multi-process APIs knowing well that even way simpler VS API are unstable or crash VS right away and they are unsupported for other version of VS one need to support. So, we put everything in-proc, and hope that the next SP of VS does not break our code AGAIN (happened more than once).

Additional obstacles were that the documentation for the VS SDK was poor, insufficient, and hard to access. Also, the engagement of the VS team with the extension writers can be described as "non-existent". Note: around VS 2012 I stopped actively developing for VS but keep talking to people who do. Their opinion is that things have changed for the better.

To wrap up my rant, the fact that there are few 64-bit VS extensions does not mean that they are not needed. It means there are legitimate technical and business obstacles to writing them.

  18. Ooh says:

@teo: In general I agree (though I haven't had an experience like your number 1 yet); but I think number 3 is unfair and incorrect. "So, the requirements have changed and whatever worked in '89 is no longer applicable." Things haven't changed that much, because all of the mentioned code need not run in a single process. As Rico said, pretty much always there's a big bunch of data that you need to have (24M was huge in '89, like say ~16TB today), but you only need a comparatively small index for many/most of the relevant things the user does regularly.

    Most important in engineering is to find the balance and bucket your data accordingly.

  19. Simon says:

Your point about 2009 vs 2016 is super important. I've been following your blog since then (at least), and I was ok with VS staying 32b in 2009; I mean, I understood it perfectly. But now is the time for things to change. VS has become this bloated memory eater. VS has to force an out-of-process model at least for some extension categories, just like Windows Explorer did with shell extensions. And this is really a VS remark, not a general 64b vs 32b remark – maybe you picked the *wrong* example 🙂

  20. Anna-Jayne Metcalfe says:

    I can back up a lot of what @teo said.

    I've been working on VS plug-ins since VC6 days, and keeping up with breaks in the VS interfaces is tricky enough that having a 64 bit version of our VS plug-in to support is quite simply not something we'd want to do unless it was necessary or our customers really needed it. Note that the code itself is already 64 bit ready for the most part as we already have a 64 bit version of the same plug-in for Eclipse by necessity (Eclipse has both 32 and 64 bit versions).

    Historically at least every second version of Visual Studio seems to bring painful surprises somewhere in the plug-in interfaces. For example, we had to make rather major changes to integrate into VS2005 (broken command bar and VCProjectEngine interfaces), VS2010 (broken command bar interfaces – again!), VS2012 (theming) and VS2015 (removal of add-in support). Some of those changes were unannounced or were sprung on the wider community by the VS team with little warning (I'm looking at you, "Introducing the New Developer Experience" – blogs.msdn.com/…/introducing-the-new-developer-experience.aspx) so when a new version of VS arrives, the first question we have to answer is always "how big a headache will this one cause us?".

    Against that background quite frankly adding a 64 bit build to the mix would be a distraction we could do without, but we'll do it if we have to.

  21. Andre Kaufmann says:

Two points are missing – how will you develop and debug a 64-bit WPF application that uses 64-bit native DLLs, or one not compiled as AnyCPU, with a 32-bit Visual Studio version?

Additionally, while it's true that commonly memory isn't an issue, some types of applications don't need the physical memory itself, but they need the larger address space of virtual memory.

Besides, your statement "the same data coded for 64-bits is bigger than it would be coded in 32-bits" is IMHO not quite correct – why should a data structure formerly compiled to 32-bit now be larger under 64-bit? Or which kind of data do you mean?

  22. Elkvis says:

    Having a program that runs in 64-bit mode does not mean that it automatically uses more memory, simply by virtue of being 64-bit.  Even when building a 64-bit program, an int, the most commonly used data type, is still only 32 bits on Visual C++.  The only thing that implicitly gets bigger is pointers.

  23. D.E. says:

All your points are irrelevant in 2016.  The same code and data are bigger in 64-bit… so what?  This is the era where most home users use at most 10% of their massive 6TB hard drives, and it's typically with media.   The claim that the same code and data, encoded bigger, run slower on the same processor doesn't hold water either, since modern CPUs are designed and optimized for 64-bit code execution.  That's 64 wires across the CPU's die hitting 64 transistors in parallel in 64-bit solid-state logic circuits.  In fact, most cases prove the opposite to be true… 32-bit code on a 64-bit processor runs slower because the processor considers it a special condition.  Here's a question… walk through your office and see how many PCs are running a 64-bit operating system.  Why slow down my programming by piping it through yet another layer of conversion to go from 32-bit code to run on my modern 2016 64-bit PC or server? Wouldn't the "fastest" code be going through as few useless platforms, scripting engines, VMs and conversion layers as possible?

  24. Ward Durossette says:

I used to long for a 64-bit version of Visual Studio, because I had to install two Oracle clients, 32- and 64-bit. The 32-bit Oracle client was there just to run debug in VS, the 64-bit for Toad and production apps.  Oracle has since released a fully managed client that is bit-depth agnostic and we jumped onto that so fast your head would spin.  So now, I really don't care.

Having said that, the point is that being 64-bit across the entire stack offers advantages and fewer problems.  I still have 99 problems but getting the bitness right ain't one of them.  And I used to have to go around and fix it for my people as well. Overall the cost of maintenance is higher with mixed 32 and 64 bits.

  25. Steve Naidamast says:

    As I am not an internals specialist as many of the people responding here but an application developer, I still feel as if I understand much of what is being said in these responses.

    However, we should note that with each successive increase in power of our micro-processors, coding becomes sloppier since many of the restrictions that imposed tighter coding such as memory constraints are now relaxed, making development increasingly complex for many developers.  So many tools and paradigms have become available to application developers that keeping abreast of them all for career security has become an impossible endeavor.  This is why I have always advocated a return to the basics with which quality applications can be written as easily as with the more currently available complex tool-sets.

With increases in power there is always a downside, namely that things that couldn't be done before can now be done more easily.  The question is, is any of it necessary when we got along fine without the new tools?  Not really, since we develop applications to the same foundations as previously.  Younger professionals today have simply opted for more complexity believing they are creating better applications in a purer fashion.  They are not, but the perceptions are there.

If a 32-bit Visual Studio will work just as well as a 64-bit one in developing quality applications, I can see the reticence on Microsoft's behalf in creating a fully compliant 64-bit version.  In some ways the 64-bit version will run faster and in some ways it won't.  As always, it depends on what is being done internally, as the author of the article suggests.  So in the end we have to consider what actual value a 64-bit version of Visual Studio would give us.  If not that much on the currently available architectures, then the question is rather moot.  However, if and when the underlying architectures change then maybe more concrete arguments can be provided for such a development.

That being said, whether Visual Studio should have a 64-bit version or not I leave to the better experts on this thread.  However, using a 64-bit OS does have its advantages today since we do have more access to memory naturally without the workarounds of years ago.  And this does in fact allow us to do good things such as running VMs more efficiently to test out different ideas on different environments or run a variety of OSs while maintaining our preferred one as the host system.

  26. Klaus says:

    For me it is very simple:

    VS running as a 32 bit application has made the development of pure 64 bit applications more difficult for me.

    Therefore, a 64 bit VS would be great to have. It just makes things cleaner and requires less workarounds.

    This is why I very much hope that Microsoft will offer a 64 bit Visual Studio soon.

    Best wishes

    Klaus

  27. Billy O'Neal says:

64-bit VS would also allow deployment of VS into places where 32-bit code can't run, such as Windows Server without the WoW64 subsystem installed.

  28. Doctor Dan says:

I couldn't agree more… unless the amount of data that needs to be processed exceeds the size that can be handled by 32 bits without requiring endless swapping. 32-bit MS Word, for example, chokes on 400-page documents.

  29. James Seed says:

    Having worked with segmented 16-bit, then 32-bit, then 64-bit, always under the Windows OS, I'm firmly convinced that the introduction of the 64-bit Windows OS, and the requisite support for it in VS, was superbly timed.

Initially, Intel came up with an absolutely *horrible* implementation of 64-bit, then AMD came up with an absolutely *beautiful* implementation of it.  Then Intel, to their credit, swallowed their pride and admitted that AMD's implementation was *fantastic*, essentially trashed what they had come up with, and adopted AMD's standard verbatim.

Now, perhaps for the first time in history, we have a simply gorgeous development environment.  In 64-bit, if your C++ or assembly functions have four parameters or less, ALL of them are passed in registers.  No ifs, ands, or buts.  This is a HUGE bonus to efficiency, regardless of how big your memory cache is.  I mean, let's face it, an average program has hundreds, if not thousands, of functions, each of which may be called a huge number of times.  For four parameters, that's eight writes to the stack, and who knows how many reads from the stack, every time any one of hundreds of functions is called.  That's all gone now.  Poof.  For all intents and purposes, all of the incalculably huge number of (initial) data transactions between every caller and every callee in a 64-bit program are now done INSIDE THE CPU – no memory required.  So excuse me for saying so, but I think that's HUGE.

    Anyway, I have more to say on the matter, but I'm not even sure I'll be successful posting this, as I'm not a member, and as a general rule, not a joiner either.  So here's me attempting to post this..

  30. James Seed says:

    Well, much to my surprise, my previous post succeeded – kudos..

    So anyway – to me, the biggest advantage of using 64-bit is that – how should I put this? – it completely obliterates the restriction that I've always had to live with, and that I've always *hated* having to live with – that I couldn't write a program that could exploit ALL of the memory in my computer..  I mean, what's the point of shelling out hard-earned money for 64 gigabytes of RAM, if I can only actually *use* two or three gigabytes of it at a time?  To make the Windows OS look good?  To impress that cute girl who lives down the hall?

    Thing is, computers were invented to do math, which they do extremely well.  And the math is getting really, really interesting.  Neural networks, deep learning, genetic algorithms, rule-based emergence, symbolic processing – all of these emerging technologies and advancements, and much, much more, are all fairly accessible on the internet, and can be investigated further on a modern laptop (thanks largely to Moore's law and, oh yeah, the Internet) !!

    Oh, but not in 32-bit.  Sorry, Charlie.  Now get back to stocking the shelves will you?

    That's how I felt whenever I wanted to, oh, say, genetically evolve a neural network topology to perform a particular task.  Or investigate the properties of a non-trivial two dimensional rule-based automata.

    Sorry, Charlie.  "You're just a puny program, not an operating system.  You can't have access to all of your memory.  That belongs to Microsoft.  Buh-bye."

    With 64-bit, I feel like I own my computer again.  It doesn't belong to Microsoft anymore.  It belongs to me.  If I have 64 gigabytes of RAM, well heck, I'll use it ALL if I need it.  And the only thing I need Windows to do is shut up until I'm finished (which it's actually not especially good at doing, BTW, but that's another story).

    <continued due to word count limit>..

  31. James Seed says:

    <continued from previous post>

    Of course, I realize that not everyone is interested in exploiting gobs of memory.  But they can still use 64-bit.  The pointers are twice the size?  Well, yes, they are, but if that's a concern at all (in, say, a large linked list for example), you can simply use an integer offset into a pre-allocated (64-bit) memory block instead.  Not a big deal.  It wouldn't even slow down the processor much, because it would simply translate to an indexed access instead of a non-indexed access, the difference between which at the processor level is trivial compared to the amount of time spent actually retrieving the data..

    But even with pointers being twice the size, is that such a high price to pay for *never* having to worry about memory ceilings again?

    Or to put it another way, to finally relegate the entire issue of memory availability *out* of the software realm entirely, and finally be able to put it back to where it rightfully belongs, and has always belonged, in the first place – as a simple hardware issue?

Personally, I think that tradeoff is a bargain, even without all the additional benefits that 64-bit programming has to offer.  Benefits like always passing parameters in registers, a non-trivial increase in the number of CPU registers, 128-bit division, etc., etc..

    So there it is – my rant.  For what it's worth..

    Should you disagree in any way, please send all comments to I_Really@Do_Not_Care.com.  You will be promptly handled.

Just kidding 🙂  Really.  That was a joke.. 🙂

  32. ricom says:

That wasn't all rant at all, thanks for writing!

  33. Leonardo Ferreira says:

So, what are we, developers/troubleshooters, supposed to do when a client sends a 3 GB process dump? Because there is a 100% chance of an OutOfMemoryException popping up and no debugging happening…

  34. programmer says:

    Small test, just for fun:

    #include <iostream>
    #include <Windows.h>

    int main()
    {
        const int size = 1000000;
        const int count = 1000;
        int* a = new int[size];
        long long sum = 0;

        unsigned long long tickStart1 = GetTickCount64();
        for (int c = 0; c < count; ++c)
        {
            for (int i = 0; i < size; ++i)
            {
                a[i] = i % 10;
            }
        }

        unsigned long long tickStart2 = GetTickCount64();
        for (int c = 0; c < count; ++c)
        {
            for (int i = 0; i < size; ++i)
            {
                sum += a[i];
            }
        }

        unsigned long long tickStart3 = GetTickCount64();
        delete[] a;

        std::cout << "sum " << sum << "\n";
        std::cout << "1= " << (tickStart2 - tickStart1) << "\n";
        std::cout << "2= " << (tickStart3 - tickStart2) << "\n";
        return 0;
    }

    Compiled for 32-bit, 3 runs:
    1= 1482,1529,1528
    2= 281,281,281

    Compiled for 64-bit, 3 runs:
    1= 406,343,327
    2= 234,188,188

    "You can have your own opinion, but can’t have your own facts." So please don't say that 64bit compilation is always slower.

  35. tbd says:

Sadly you're wrong about your "relevant facts". The correct answer is that it all depends. A 64-bit processor and application is better (faster, smaller code) for processing 64-bit or larger data elements than an 8-, 16- or 32-bit processor because it takes fewer instructions and steps. When using extended instruction sets, a 64-bit processor may be able to process more 8-, 16- or 32-bit data elements in parallel than a smaller processor. Running a 32-bit app on a 64-bit processor and OS such as Windows requires switching modes that a 64-bit app wouldn't.

  36. Darius says:

    Well, one thing for sure is that with more memory VS won't crash when it approaches 3GB of RAM (as observed in task manager). Clearly a big win.

  37. ricom says:

It's always the case that the answer depends, but in this case we're focusing on the relevant phenomena for large interactive applications.  These are going to have different characteristics than computational engines.

    If you look back at my article in perf quiz #14 I did experiments to simulate the consequences of pointer growth.  

    The microbenchmark above illustrates that in the absence of pointers you can do ok.  It's one of the dimensions in quiz 14.

  38. ricom says:

    Keep in mind I made these assertions for the purpose of illustrating the foundation of the con argument and I then proceed to shoot them down.

    However, you won't notice extra memory use if you stay small enough to stay in cache.