Behavior of 1.0/1.1 managed apps on 64bit machines


As I alluded to in my previous post there are multiple ways that we can go in terms of supporting legacy 1.0/1.1 assemblies on a Win64 machine. The context of the 1.0/1.1 support story is made somewhat simpler by the current plan of having both a v2.0 32bit CLR and a v2.0 64bit CLR on the box but no 1.0/1.1 CLR bits.


I mentioned that the 1.0/1.1 compilers didn’t know anything about “bitness”. Basically they spit out a PE image that said “Hey! I’m managed! Run me with the CLR!” (gross simplification), whereas the v2.0 compilers produce images that range from “Hey! I’m managed, and I can run everywhere!!” to “Hey! I’m managed and I only run on x86!” etc…
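For the curious, that "bitness" intent lives in the image's CLI header. The sketch below (Python, purely illustrative) shows roughly how the claims read; the two flag values are the real COMIMAGE_FLAGS_* constants from the CLI file format, but the `classify()` helper and its version-string check are my own simplification, not the actual loader logic.

```python
# Rough illustration of how a managed PE image advertises its bitness.
# Flag values are the real COMIMAGE_FLAGS_* constants; everything else
# here is a simplification for explanation purposes only.

COMIMAGE_FLAGS_ILONLY        = 0x00000001
COMIMAGE_FLAGS_32BITREQUIRED = 0x00000002

def classify(runtime_version, flags):
    """Guess what an image claims about where it can run."""
    if runtime_version.startswith(("v1.0", "v1.1")):
        return "legacy: no bitness intent recorded"
    if flags & COMIMAGE_FLAGS_32BITREQUIRED:
        return "x86 only"
    if flags & COMIMAGE_FLAGS_ILONLY:
        return "platform neutral (MSIL)"
    return "platform specific"

print(classify("v1.1.4322", COMIMAGE_FLAGS_ILONLY))   # a 1.1-era image
print(classify("v2.0.50727", COMIMAGE_FLAGS_ILONLY))  # a v2.0 "run anywhere" image
```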


This brings us to the fundamental question of this post — what to do with 1.0/1.1 assemblies?


Option 1: call them “Legacy” assemblies since they don’t know about “bitness”. Require them to run in the WOW64 under the 32bit CLR as we can’t say for sure that the developer who created them was thinking about 64bit compatibility when they were created (remember that many of these were created years before even a 64bit alpha of .NET was available at PDC last year). Additionally, make the loader get angry and spew something along the lines of “BAD_IMAGE_FORMAT” if you try to load a legacy assembly in a native 64bit managed process, just as if you had tried to load a v2.0 assembly marked x86 only.


Option 2: treat them like the v2.0 notion of MSIL assemblies, allowing them to be used from both 32bit and 64bit managed processes. By default, if they are an exe, kick off the 64bit CLR when someone tries to start them. This would cause them to run as a 64bit process even though their creators probably didn’t have that potential in mind when the code was written and tested.
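As a decision table, the two options boil down to something like the following sketch (hypothetical pseudo-logic in Python, not CLR code; the behavior names come straight from the descriptions above):

```python
# Hypothetical decision table for the two options under debate.
# This illustrates the policy question; it is not the CLR's code.

def load_result(image, process_is_64bit, policy):
    """image is 'legacy' (1.0/1.1), 'msil', or 'x86' (v2.0-tagged)."""
    if image == "x86":
        return "BAD_IMAGE_FORMAT" if process_is_64bit else "loads"
    if image == "msil":
        return "loads"
    # 1.0/1.1 images are the open question:
    if policy == "option1":   # conservative: treat legacy like x86-only
        return "BAD_IMAGE_FORMAT" if process_is_64bit else "loads"
    else:                     # option2: treat legacy like MSIL
        return "loads"

assert load_result("legacy", True, "option1") == "BAD_IMAGE_FORMAT"
assert load_result("legacy", True, "option2") == "loads"
assert load_result("legacy", False, "option1") == "loads"
```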


 


Cases can be made for both sides. Right now the more conservative approach is “Option 1”, which is what we are leaning towards. But there are definitely some negatives to it, the primary one in my mind being that it makes the transition to 64bit harder for groups that have dependencies on a lot of managed code they don’t own but are willing to do the testing legwork to make sure it works in 64bit mode anyway. In effect it makes 1.0/1.1 managed assemblies much like 32bit native components as dependencies for moving your app to 64bit, because in that scenario we won’t let you load 1.0/1.1 assemblies in your 64bit process.


One of the great things about managed code is that frequently there isn’t much, if any, work to be done to move it to 64bit. But given “Option 1” above we would at least require the work of a recompile (though one could imagine a frightfully dangerous tool that would modify 1.0/1.1 headers to look like v2.0 headers and pretend to be a v2.0-compiled MSIL image… Please don’t do this!!). If you don’t own the managed code you’re using, that means waiting for whoever does to recompile and give you the properly tagged version before you can move your app to 64bit.


Mind you, that is probably better than the alternative. If we were to just load up 1.0/1.1 images in a 64bit process, expecting them to be the equivalent of v2.0’s MSIL (which is what the compilers currently produce as a default), you could end up with all manner of random execution failures, usually related to calling into some native 32bit code or other… “Option 2” would allow those who are willing to do the legwork in testing to do their due diligence, test their application in a 64bit environment thoroughly even though it might contain 1.0/1.1 components, and be able to say with reasonable confidence that their customers won’t have problems running 64bit native. The fact that I said “willing to do the legwork in testing” and “due diligence” etc. should be setting off huge danger signals in your head. How many people are willing to thoroughly test some component they paid for? Isn’t that part of what you paid for??
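To make that failure mode concrete, here is a tiny Python illustration (not .NET code; the addresses are made up) of the classic bug: code that shoves a pointer-sized value through a 32-bit slot works by accident on x86 and silently corrupts on 64bit.

```python
# Simulates a P/Invoke-style signature that declares a native pointer
# as a 32-bit integer. Works by accident on x86; lossy on 64bit.
# Illustrative only; both addresses below are invented.

def marshal_as_int32(value):
    return value & 0xFFFFFFFF  # the high 32 bits fall on the floor

x86_handle = 0x0012FF70          # fits in 32 bits: round-trips fine
x64_handle = 0x00007FFD12345678  # plausible 64bit address: gets truncated

assert marshal_as_int32(x86_handle) == x86_handle
assert marshal_as_int32(x64_handle) != x64_handle  # silent corruption
```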


There are of course all manner of “in-between” scenarios, few of which are supportable or justifiable, so for the purposes of this debate let’s stick to these two options.


 


The main reason I started writing this post, however, wasn’t to make up your mind but to poll your thoughts…


Thoughts?

Comments (27)

  1. For executables, the conservative approach makes sense, but I think that for libraries you should assume them to be "neutral". After all, it is the responsibility of the app to test with the libraries. Of course, in more dynamic scenarios (e.g. user specified plugins) this isn’t so clear cut, but in any case, I would definitely not cut off the ability to use 1.0/1.1 libraries in 64bit.

  2. Michael says:

    I think that the conservative approach should only be taken if P/Invoke is actually used within the 1.0/1.1 assembly. In a pure MSIL assembly using no pointers or other advanced features ("safe code"), the code should run without changes on 64 bit, shouldn’t it?

  3. Jeroen — This is one of those times when internally we fight about phrases like "responsibility of the app to test" and phrases like "platform adoption blocker" and "what happens when my mom gets a ‘Fatal Execution Engine Error’"… As mentioned I happen to agree that your point is the major detractor from the conservative approach.

    Michael — Given safe code with no P/Invokes and such it is fairly seamless to move to 64bit. Generally just a compile with a v2.0 compiler. You could imagine taking the assembly, disassembling it with ILDASM, and then reassembling it making sure to use the v2.0 ILASM which I believe will give you a new assembly with "bitness" knowledge. Whether or not you could get away with shipping this and then have any reasonable expectation of support from the supplier of that component is questionable.

    Both: the general question at the runtime level is one of how conservative do we want to be with regard to trying to guess developer intent?

  4. Michael Entin says:

    The problem with the conservative approach (option 1) is that one can’t ship a single binary that works with .NET 1.1 now and will also work in 64-bit applications.

    Even if I’ve tested my Everett library or application on 64-bit Whidbey CLR, I have to create (and distribute) a separate binary(-ies) for Whidbey. If I compile assembly with Everett, 64-bit Whidbey would not load it. If I compile assembly with Whidbey, Everett would not load it. Thus I need two separate binaries.

    That distribution problem can be a serious blocker for adoption of 64-bit CLR usage.

  5. Michael — you’re correct in your assessment (assuming we go with "option 1"). But given the conservative approach you would actually need to create a separate binary(-ies) just for testing purposes, at which point distributing it seems reasonable. Assuming that option you would still be able to run your application on the 64bit platforms under the 32bit CLR (which under 64bit extended architectures can still be very performant).

    While I was debating this very issue the other day with the 64bit CLR test lead he made the valid point that the two "adoption blocker" issues we’re dealing with at this point are "Windows64" adoption blocker vs. "64bit CLR" adoption blocker. Fundamentally there is a hierarchy there, not just for political reasons (said as a CLR dev) but also because without a Windows64 there isn’t any need for a 64bit CLR…

    If people install 64bit Windows and then run into annoying crashes and such when they run random 1.0/1.1 managed code that they might have downloaded from some site or other (which may not have been tested on 64bit), that is a Windows64 platform adoption blocker.

    If on the other hand we are conservative we end up with a situation where legacy code you install just works, albeit in a 32bit process. People who really need 64bit (a need that is dire for some but debatable for most) initially have to do some legwork to get there (namely recompiling and presumably testing). And, as we move forward more and more stuff runs natively under 64bit. Like the Win16 to Win32 transition, we don’t anticipate that everyone’s code will make the leap immediately.

    Either way, I really appreciate the comments!! Like I said this behavior is still under debate.

  6. mihailik says:

    Why not create a config policy option to switch between these cases?

    The developer could apply a publisher policy, and the administrator could customize it too. Is there any reason not to do so?

  7. Jason Baginski says:

    Wasn’t the whole point of moving to the CLR the fact that you didn’t have to optimize your software for particular systems anymore?

    If I’m going to have to start writing different versions, I might as well just go back to C/C++. I convinced my boss to go with .NET with the argument that we wouldn’t have to rewrite to support the latest/greatest as the CLR creates a level of abstraction and it handles all the low level hardware related issues/optimizations.

    If now there’s problems moving between platforms, then it’s an error with the CLR, not apps written for the CLR. If the CLR isn’t smart enough to run older CLR apps the way they ran on earlier versions, and for some bloody reason we can’t have side-by-side versioning, then that craps on the whole reason we moved our development to .NET in the first place.

  8. Barry Dorrans says:

    I’m afraid I’m with Jason here. If I’m running in a CLR environment, or a JVM environment I shouldn’t have to care what’s under the hood, the runtime should be taking care of that for me. I don’t care about big endian, or little endian if I target Compact Framework, Rotor or Mono, so why should a step up be a big deal.

    In fact the idea of there *being* targeted code in a CLR environment strikes me as a bit of a betrayal of the whole idea. Yes, I do realise the problems with PInvoke assumptions, but darnit, it just feels wrong. If Windows 95 managed the thunking layer for Windows 3.1 code, why can’t there be a CLR thunk? Or maybe I just like to say "thunk" <g>

  9. Sorry I took so long to reply, I was out of town over the weekend and just got back.

    mihailik: It’s been discussed. And it is a possibility, though it might fall under the category of "more trouble than it’s worth". We don’t know all of the proper weights yet to make a fully informed decision.

    Jason/Barry: I’ll get into the "CLR should solve world hunger" discussion in a minute. But first, Jason, I have to disagree with your assertion that we aren’t giving .NET apps a migration path to Windows64. That is what the WOW64 is about, right? Just as 32bit non-managed applications move to Win64 in the WOW64 sandbox, so would (in the context of this discussion) 1.0/1.1 .NET applications: they would run as 32bit apps under the 32bit v2.0 CLR. Given that scenario, the CLR _IS_ "smart enough to run older CLR apps the way they ran on earlier versions". Fundamentally, application users probably won’t be able to tell the difference without opening a debugger and looking at where the dlls are getting loaded from…

    As for the question of whether or not the CLR should completely insulate you, the application developer, from the transition from 32bit to 64bit code, I’m afraid I’m going to have to disagree again. I believe there are some _very_ good reasons to write managed code (GC, security, advanced platform-specific JIT (x86/AMD64/IA64), a programming model that fits current programming paradigms, etc…). One of our top priorities is to make it easier for developers to write correct, performant and secure code. And I think that with this release you will see that it really is the exceptional case where correctly written managed code needs significant work to run on both 64bit and 32bit (the cases mainly revolving around interop and unsafe code). These corner cases are one of the things that I really want to talk a lot about later on this blog.

    Managed code and the CLR have made some tradeoffs relative to things like the JVM in the name of interop, performance and supporting languages like Managed C++, which is fundamentally a "closer to the hardware" language, even in its managed version. This does fundamentally result in .NET managed code being a little bit closer to the hardware than, say, Java (excepting of course JNI), but not much.

    That said, as a software engineer would you be willing to ship your app for use on a 64bit machine without testing it first? I hope not. Generally, I think you will see that you don’t have to rewrite your application to support 64bit, though in your testing on that platform you may find a couple of bugs that didn’t show up on 32bit (my next blog entry is a great case where a PInvoke signature was written to implicitly count on the x86 calling convention, probably without the developer even thinking about it; that case was "technically" wrong all along, but only showed up as broken when the app ran on a 64bit platform).

    So, the question becomes one of "How much safety do we dial into the platform?", as most of that safety comes with some kind of cost or other. A good friend of mine from college is a mechanical engineer working at a large car company. Recently we were hanging out at a friend’s wedding and had splurged on the car rental to get a Honda S2000. It is an incredible car, but he pointed out (as he has significant experience in auto dynamics benchmarking) that in his opinion the S2000 had the least safety dialed in from the factory of any street car he’d ever driven… What does that mean? Well, it’s a wonderfully fun car to drive, if you know how to drive it, but you’re always driving it near the edge. In some people’s hands it could be a deadly weapon, where the same person would be safe (though maybe still overly aggressive) if they were driving, say, a Honda Accord.

    So, how much safety do we design in? And how much do we let performance drivers just drive (performance in this case not only meaning raw speed, but also native platform calls, unsafe code, etc)… In the case of 1.0/1.1 legacy apps on 64bit platforms, if we decide to automatically make them run under the 32bit WOW64 it would effectively be a policy decision to dial some safety into the platform. As I mentioned earlier in the blog entry, if we do the reverse, where we float up 1.0/1.1 apps to 64bit, there will be a subset of applications that have incorrectly used the power that the CLR lets them have and are sure to fail spectacularly. There are probably _many_ more that would work flawlessly.

    I want to continue to thank people for your thoughts!! Be assured they aren’t falling on deaf ears.

    -josh

  10. "That said, as a software engineer would you be willing to ship your app for use on a 64bit machine without testing it first? I hope not"

    Well, leaving aside the lack of hardware right now, the monetary constraints that small software houses/individual developers work under mean you can’t always do what you know is right. But that’s another matter.

    I think the problem and shock arises in that we expect the CLR or a JVM to do the isolation. People won’t think about pinvoke, they will assume that because it’s there in the CLR it’s safe to use.

    So, what to do? Well if Whidbey hadn’t been pushed back so far into the future then a bunch of wonderful warnings in the next compiler would have been nice.

    Do we, as developers, want safety? Well, isn’t that what the CLR is supposed to be about? No more pointers, no more forgetting to free, no more overflows. MS doesn’t have the best reputation when it comes to "safe code", so damnit, make it safe.

    As a final thought (hey, it’s early in the UK, don’t expect me to put these into any semblance of order or sense <g>), surely it’s possible to sanity check the assembly before running, store a checksum away somewhere if you know it’s safe, and run silently, or question/throw a message to the user if there’s some dodginess under the covers.

  11. Barry — good point about small software houses. And my reply would be that if we go with the option of running 1.0/1.1 apps under the WOW64 in 32bit mode even though they’re on 64bit boxes then we’re doing what you’re asking for? We’re making it safe… Are you in agreement with this? Or am I missing the point?

    In that scenario you don’t have to test on 64bit machines (if your app works on x86 and breaks on the 32bit CLR in the WOW64 then that is a CLR or Windows bug, not a bug in your code). And when you’re ready to test on 64bit machines (they’re getting more prevalent; http://www.hp.com has an AMD64 system (a450e series) at US$720 when I just looked it up), you can, and ship a binary that is tagged as 64bit safe (read my prior entry on WOW64 for a treatment of the topic of compile-time bitness tagging; the current default implies 64bit safe because most code is) and it will run natively when people run it on their 64bit machines… This gives you the safety now of people being able to use your app the way you intended it to be used when you wrote the code, even if they’re on a machine that you haven’t tested on…

    In writing my response here I’ve realized that I’m not really sure which definition of "safe" you mean, in terms of options to go with in this debate, which are you trying to argue:

    a) all 1.0/1.1 apps should run as 64bit native, but the CLR should go out of its way to be extraordinarily careful with them such that no matter what they do they don’t break?

    b) all 1.0/1.1 apps need to run in a safe environment, and since they couldn’t possibly have been tested on 64bit machines when they were created, running in a "safe" mode under the WOW64 and the 32bit CLR is acceptable, as they will then run as expected by the application developer

    c) something else: <please elaborate>

    I’ve been proposing option "b", which I believe gives the "safety" that you desire, but I could be wrong in my assumptions about your desire.

    As for your final thought: yes, but for a number of reasons I generally believe that falls under a potentially "very" fragile implementation category that has the potential to take away from rather than add to the "safety" we’re trying to dial into the runtime.

    p.s. don’t take my S2000 argument to mean that we want to be the S2000 of runtimes… We definitely don’t!! want to be the runtime with the least safety dialed in out of the factory!! But at this point we may not want to be a Honda Accord either.

  12. I’m for safe in this instance, so WOW it. But of course then you lose out on the 64bit goodness.

    Wouldn’t it be possible to look inside for things like pinvoke, and if an exe is well behaved then run it in 64bit mode, for speed/extra memory/cool funkiness? If it looks like it’s doing something bad, then isolate it and WOW it.

    However … what will happen if you have a 32bit assembly which contains, say, some business objects, and a 64bit assembly wants to use the objects within? Can the 64bit CLR access objects hosted in the 32bit CLR?

    I’d prefer option b. And a free MSN universal subscription and AMD machine for testing please.

    (As an aside, what CLR will Yukon on a 64bit platform be using for its stored procedures?)

  13. 64bit apps running in the 64bit CLR can access apps running in the 32bit CLR, but it will be cross process like any other cross process access to a 32bit process (out of proc native COM component for instance) and will need to be careful about bitness.

    As for Yukon, the 64bit Yukon will use the native 64bit CLR for managed code. This stems from the fact that it is running 64bit native and that the CLR is running in-process.

  14. Ruben says:

    I think the first two posters were right here. The 32bit CLR already knows and checks for verifiable code (type safe, no pointers, no P/Invoke). Shouldn’t verifiable 1.x code be able to run on 64bit without any worries? (If not, I’d *really* like to know why, and others probably with me.) Anyway, wasn’t this the promise made to developers? Isn’t that why Java already runs on 64bit, without recompiling?

    For libraries containing non-verifiable code, a recompile would be a pain, but reasonable; for fully verifiable managed libraries I can’t think of any justification. I’m mainly talking libraries here; .exe’s already bind to their original platform (1.0 or 1.1), so it would be no surprise if they’re treated conservatively and remain 32bit.

  15. So, while I wrote some of the code that controls whether or not exes get loaded into the 64bit CLR when they start up, I didn’t write the code that controls the loading of libraries used at runtime by 64bit processes, and it looks like I’ve been writing the wrong thing here in regard to the way we’re leaning with that implementation. I just had the test lead for 64bit stop by my office on his way to somewhere else and say "no… dude, you can load libraries, it’s only exes which we kick into the WOW64" (in reference, of course, to our current implementation, and therefore the incorrect info I’ve been piping out on this blog <embarrassed>). An interesting statement for him to make, as when we were debating this issue the other day that was what I was pushing for and I got the feeling he was pushing back… Turns out I got the wrong read.

    I’m going to talk to him in a little bit and clear things up as to what our current implementation is (read: one possible and probable implementation of what we will ship in v2.0)… The size of this product sometimes amazes me… I’ll post an update when I get a fuller story, but given the thoughts I’m seeing shared on this blog, this difference will be pleasantly received.

  16. mihailik says:

    Josh Williams:

    > more trouble than it’s worth

    What troubles did you mean? The only one I can imagine is loader-time config searching.

    This is really a technical coding task. Of course it has development costs, but it is a one-time payment.

    On the other side, we would have a universal, flexible solution that can provide fine-grained support for any existing case.

    Examples.

    If you have a 3rd-party library, you can test its 64bit safety on your own and set the config flag. No need to wait for a 3rd-party recompile.

    If you are a library or app producer, you can easily and explicitly define 64bitness rules. This doesn’t need a 2.0 recompile and doesn’t hurt .NET 1.1 compatibility.

    If you, as a producer, have an existing app, you can test it and, if it conforms to 64bitness, provide a policy to customers. Or you can patch some 64bit-unsafe assemblies and ship those patches with the app policy in one zip package.
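    To make the proposal concrete, here is a purely hypothetical mock-up of what such a per-assembly bitness policy might look like. No such schema exists in any shipped .NET version; every element and attribute name below is invented for illustration.

```xml
<!-- HYPOTHETICAL ONLY: a sketch of the proposed per-assembly bitness
     policy. None of these elements exist in real .NET configuration. -->
<configuration>
  <runtime>
    <bitnessPolicy>
      <!-- a 1.1 library the deployer has tested on 64bit -->
      <assembly name="Acme.Widgets" behavior="neutral" />
      <!-- a 1.1 assembly known to P/Invoke into 32bit-only native code -->
      <assembly name="Acme.NativeShim" behavior="x86only" />
    </bitnessPolicy>
  </runtime>
</configuration>
```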

  17. Mihailik — sorry for taking so long to respond, things have been busy at work.

    I agree that it’s really a technical task; the questions I have (which I have not spent time researching at this point and therefore can’t answer) include:

    – how would this change in publisher policy interact with 1.0/1.1 runtimes on 32bit boxes

    – how does this change interact with the 64bit CLR on 64bit boxes (the obvious question) and with the 32bit CLR on 64bit boxes running in the WOW.

    – do we enable only the x86 vs. neutral specification? or do we also allow specifying that a 1.0/1.1 assembly is 64bit only (and yes, it is possible to write 64bit-specific code in a 1.0/1.1 assembly: PInvoke, unsafe, etc…)

    – how does versioning work with this publisher policy

    And there are more… At this point I just don’t know. But I have made sure that your thoughts are getting to the right people.

    -josh

  18. Whether an app always loads as 32 bit, always loads as 64 bit, or is allowed to run on either OS platform should be configurable in the application manifest. You definitely need to provide a way for developers to mark their .NET 1.1 applications as "fully tested for use on .NET64".

    -Danny

  19. Mahavir says:

    This was a Great Article, It helped me finish my paper ! Many Thanks !

  20. Phil Wilson says:

    It seems to me that there are MSI deployment options to consider here. If you install a 1.0/1.1 assembly and mark the installer component with the msidbComponentAttributes64bit bit set, why not make it load as a 64-bit process? It’ll look really bizarre if people build MSI files, mark their 1.1 executables as 64-bit installer MSI components with associated registry entries that go into the 64-bit registry, and then the executable runs as a 32-bit process and can’t find its registry entries. It also sounds like a 64-bit MSI installer component with an Interop DLL might write 64-bit Interop registration entries that point to a DLL that will load as a 32-bit DLL, another incompatibility. My proposal is that you honor msidbComponentAttributes64bit in MSI components containing 1.1 assemblies and allow them to load as 64-bit processes/DLLs; then a conscious decision has been made by the user that the assembly will work as a 64-bit process. At the very least, talk to the MSI folks about the implications of a 64-bit MSI component that contains things like 64-bit registry entries and 64-bit custom actions, but with an executable that runs as a 32-bit process.

  21. In a number of blog entries I have discussed how on 64-bit machines .Net applications can run as either…

  22. This article covers some 64 bit aspects regarding managed code and COM+ applications. The 64 bit info
