Beating the CLR…


In his characteristic style, Rico talks about what it would take to “beat” (from a perf point of view) the CLR implementation of generics with hand-authored code.  In addition, the post reiterates some key principles on how NOT to make technology choices.  I’ll add a couple to that:

My mom uses feature X…

I saw Don Box talk about feature X in his underwear…

I heard Chris Anderson talk about feature X for 3 hours with zero slides…

Dr. Dobbs did a cover article on feature X….

 

This is true even if X = “Managed Code”.  I mean, really, as much of a managed code zealot as I am, even *I* would say don’t use managed code if it does not offer a good cost/value proposition (a good deal) in your scenario.  I certainly hope the productivity, scalability, deployment capabilities, security, 32- and 64-bit interoperability, developer support and community, and great IDE all make managed code worth it for your next project.  But if they don’t, by all means don’t use it.
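
Coming back to the generics point for a second: here is a minimal sketch (my toy comparison, not Rico’s actual benchmark) of the kind of thing at stake. Summing ints through the non-generic ArrayList boxes every element on Add and unboxes it again on read; the generic List<int> stores the values inline. To beat the generic version with hand-authored code you’d have to write and maintain a type-specialized collection yourself.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class GenericsVsBoxing
{
    const int N = 10000000;

    static void Main()
    {
        // Non-generic collection: every int is boxed on Add and unboxed on read.
        ArrayList boxed = new ArrayList(N);
        for (int i = 0; i < N; i++) boxed.Add(i);

        // Generic collection: ints live inline in the backing array -- no boxing.
        List<int> generic = new List<int>(N);
        for (int i = 0; i < N; i++) generic.Add(i);

        int start = Environment.TickCount;
        long sum1 = 0;
        foreach (object o in boxed) sum1 += (int)o;   // the cast unboxes each element
        Console.WriteLine("ArrayList: sum={0}, {1} ms", sum1, Environment.TickCount - start);

        start = Environment.TickCount;
        long sum2 = 0;
        foreach (int i in generic) sum2 += i;         // no boxing, no casts
        Console.WriteLine("List<int>: sum={0}, {1} ms", sum2, Environment.TickCount - start);
    }
}
```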

 

What are your bad reasons for choosing a technology?  Where have you opted not to use a feature because the “deal” wasn’t good?

Comments (6)

  1. Tom Kirby-Green says:

    One area I’m still ever so slightly concerned about with managed code is Working Set size. Look at the delta between the "hello world" C#/WinForms project and a C++ one.

    This is the one reason why I still feel that some near future, in which a significant % of running apps are managed, pretty much requires a 64-bit address space: not because you’re going to be allocating oodles of memory per process, but for the extended virtual address space needed to cope with these increased working set sizes.
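
    For what it’s worth, here’s a rough sketch for eyeballing the difference between the working set and the virtual address space a process is consuming (using the Whidbey-era Process counters; the class name and output format are just illustrative):

    ```csharp
    using System;
    using System.Diagnostics;

    class MemorySnapshot
    {
        static void Main()
        {
            Process p = Process.GetCurrentProcess();

            // Physical pages currently mapped into the process.
            Console.WriteLine("Working set:   {0:N0} KB", p.WorkingSet64 / 1024);

            // Allocated virtual address space -- the number that runs into
            // the 2 GB per-process limit on 32-bit Windows.
            Console.WriteLine("Virtual size:  {0:N0} KB", p.VirtualMemorySize64 / 1024);

            // Memory that cannot be shared with other processes.
            Console.WriteLine("Private bytes: {0:N0} KB", p.PrivateMemorySize64 / 1024);
        }
    }
    ```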

  2. Is there any verified data about the difference in working sets in real-world applications?

    Sure, a 5-line Hello World app would be much heavier in managed code, but is the ratio preserved for larger working sets? Probably not.

    This difference is probably meaningful only in border cases: for very small apps, where a small difference is relatively meaningful, and for very large apps, where you’re skirting the edges of what your OS/processor can handle and the difference between managed and unmanaged code can determine whether your app can even run on a 32-bit machine.

    For most mainstream cases, this will probably not be a make-or-break issue, I feel.

  3. Addy Santo says:

    How about multiple concurrent versions of the runtime? The CLR tends to be a bit greedy with memory, GCing only when resources become scarce. That works great when a single CLR can grab oodles of memory and internally manage the allocations, but what happens when multiple versions of the CLR each try to grab basically as much as they can? Has anyone tested this scenario?

  4. MichaelM says:

    I love the CLR, but there’s one simple reason we can’t make use of it, and it isn’t even a technological reason. None (or at least very few) of our customers have the runtime. If only MS would pull an AOL and get everyone a copy of SP2 and the CLR, I’d be happy.

  5. Eric Wilson says:

    "One area I’m still ever so slightly concerned about with managed code is Working Set size….This is the one reason why I still feel that in some near future where some significant % of running apps will be managed it pretty much requires a 64 bit address space, not because you’re going to allocating oddles of memory per process, but for the extended virtual address space needed to cope with these increased working set sizes. " Tom Kirby-Green

    I’m confused about how the larger working sets of CLR apps require a larger virtual address space. Are you saying the CLR requires most apps to have more than 2 GIGABYTES of virtual memory available? I’ve never seen any application that can back up that claim.

    Even if each application needs 20MB more memory because of the CLR, how does that affect virtual memory? Sure, it can have an effect on physical memory usage, but 64-bit by itself ain’t going to help in that area either.

  6. RS says:

    Brad, if you need some background on the value proposition (or lack thereof) of using the .NET Framework for rich client apps, give the MS Office folks a call. Why haven’t they switched to the .NET Framework yet, if it is so cool?

    Please don’t get me wrong, I am a .NET zealot too (and get into a lot of arguments with my buddies at work over that), but I don’t think MS considered large to very large rich client Win32 apps in their initial .NET development at all. All the talks and samples were about producing brand new banana apps and did not address the complex issues we face in porting Win32 technologies. There was simply no easy, viable migration path for C/C++/Win32/ActiveX/MFC/COM apps to start consuming the .NET Framework and show a visible business return in order to justify and sustain the investment.

    I think the Whidbey folks are addressing C++ interop in 2.0, and we are seriously looking at it now.