Just because I don’t write about .NET doesn’t mean that I don’t like it


Some people have inferred that I don't write about .NET because I don't like it. That's not true. I use it myself.

The reason I don't write about .NET is because I'm not an expert on it and there are plenty of other .NET blogs out there, written by people who are actual experts. (Such as Maoni Stephens, whose little finger contains more knowledge about garbage collection than most people have in their entire brain.) No point adding to it with my non-expert view. Indeed, when I hit upon an interesting .NET topic or puzzle, I usually just forward it off to Brad for him to put on his blog. Because people looking for interesting .NET content go to Brad, not me.

The fact that Rico Mariani was able to do a literal translation of the original C++ version into C# and blow the socks off it is a testament to the power and performance of managed code. It took me several days of painful optimization to catch up, including one optimization that introduced a bug, and then Rico simply had to do a little tweaking with one hand tied behind his back to regain the lead. Sure, I eventually won but look at the cost of that victory.

(I'm told there's one company that has decided against using managed code because "If Raymond doesn't even want to mention the .NET Framework then why should we bother to look at it?" What a strange argument. I don't mention IPsec; does that mean you shouldn't use it either?)

But just to dispel the rumor (and to buck both my title and my tag line), I'm going to declare this week to be .NET week. All my technical articles this week will be about .NET. Enjoy it while you can.

Comments (48)
  1. Anthony Wieser says:

    Being an old timer, I’m very comfortable with windows messages and the like, but very uncomfortable with the mapping between them and events under .NET

    While I think I understand the goals, I feel uneasy without seeing the source code (I know I can see some of what’s going on with ILDASM, but that seems like hard work).  Especially as, if it’s ever to be notionally platform independent, using Control.WndProc to do something is probably not the right thing to do.

    I hope you’ll show us how to "stop worrying and learn to love the bomb" this week.

  2. Lauren Smith says:

    While I sympathize with Anthony Wieser’s feeling that something is missing when you hand over fine-grained control to the framework, the benefits of having someone else take the reins so that you can work on the important stuff is very tempting. Hence automatic transmissions, scripting languages, and the VS resource editor.

    If the performance gap is negligible, the only drawback is having to learn the new system.

  3. Tom says:

    As an "old skool" programmer, I am not put out in the slightest by writing custom allocators or other data structures — I find it quite enjoyable, in fact.  I often shun the STL due to its slowness and its over-reliance on memory allocation to solve everything.  I will often use Win32 APIs instead of CRT calls in cases where I can get better performance with a custom solution versus the "general case" implemented in the CRT.

    However, I must say that I am most impressed with the CLR version.  The fact that such good performance can come from a simple translation of the original "slow" version of the application is, quite simply, amazing.

    A couple of things I would like to know, though:  what is the memory footprint of both versions, and what was the application startup time?  One of my pet peeves about Java (another managed and GC’d environment mentioned only because I have no .NET experience) is its huge start-up costs in both class loading and initial memory size.  

  4. Wow, what a week this is gonna be! I’m looking forward to it :)

  5. Lionell Griffith says:

    I have refused to use MFC because of its countless bugs and even worse documentation.  What it does for me can be replaced by two pages of reusable source code – half of which are comments.  

    .Net appears to be built on top of MFC.  It contains all the bugs of MFC and more.  What’s worse, it’s implemented by an un-testable 30 meg runtime of uncertain quality (already in its second version with more to come).

    If all you need to do is create nonfunctional kluge software, then I suppose it’s OK. However, I need to create tight, fast, and reliable code that provides lasting value for users.

    Why would I want to use .NET?  

    PS:  I have been developing Windows software since 1990.

  6. davidacoder says:

    .Net is not built on top of MFC. No relation between the two at all.

  7. Peter Ritchie says:

    Part of .NET is P/Invoke.  You’re still doing the .NET community a service by providing clarity on Win32 constructs that can still be used in .NET applications.

  8. Matt says:

    Anthony, if you haven’t seen reflector, you probably want to give it a try:

    http://www.aisto.com/roeder/dotnet/

    Also download the FileDisassembler plugin:

    http://www.denisbauer.com/NETTools/FileDisassembler.aspx

  9. Scott says:

    I’ll second that, reflector is utterly invaluable if you want to see what the framework is doing.  It’s even easy to use.

  10. Matt says:

    In response to Tom

    [quote]

    As an "old skool" programmer, I am not put out in the slightest by writing custom allocators or other data structures — I find it quite enjoyable, in fact.  

    [/quote]

    Ok you might find a few bits annoyingly out of your control – but a fringe benefit of managed code is lightning-fast memory allocation so long as:

    1) you are not bound by zeroing the memory

    2) you don’t allocate memory very often (thus the amortized cost of GC tends to zero)

    If 1) is an issue drop to CLI (or managed C++ if you’re ‘Old School’ ;)

    [quote]

    A couple of things I would like to know, though:  what is the memory footprint of both versions,

    [/quote]

    Depends how you measure it :)

    Considerable savings exist by having more than one app running with .net at once – especially in .net 2.0, where lots of effort was expended to reduce the number of private pages. This may be no use to you if you will only ever have one app running on a machine, but I notice the benefit.

    Versus Java there is a notable benefit to not referencing DLLs you don’t need (possible in Java by not referencing some of the jars, but these are much more coarsely grained than in .Net). Also Java cannot (natively) share the jitted code in the API. Ngenning (plus storing in the GAC) does allow for this – though at a considerable deployment hassle cost if you are changing your code often

    [quote]

    and what was the application startup time?

    [/quote]

    Much improved – there may be finagling going on behind my back in Windows to make this faster than Java can do it, but ngenning can make a lot of difference (to the extent of not needing to load the JIT at all if the whole thing has been ngened)

    Even without ngen I notice a significant improvement over java (I haven’t tried 1.5 in so long I don’t know if sun improved it significantly)

    P.S – I have no idea how to verbify ngening (as should be obvious). I apologise for the butchery of the English language this post perpetrates

  11. Gsr says:

    Raymond,

    Can you please change your blog title for this week to be "Not actually a WinFX blog" :)

    Recently i saw a job posting from a company say that "candidates  should dislike .NET" :)

    -Gsr

  12. J says:

    For the lazy and/or apparently helpless, the memory usage of the managed vs unmanaged app is here:

    http://blogs.msdn.com/ricom/archive/2005/05/20/420614.aspx

  13. Lyle says:

    "(I’m told there’s one company that has decided against using managed code because "If Raymond doesn’t even want to mention the .NET Framework then why should we bother to look at it?" What a strange argument. I don’t mention IPsec; does that mean you shouldn’t use it either?)"

    Yes! We can finally deploy IPsec now that Raymond has mentioned it!

  14. Mike Swaim says:

    (Comments on comments)

    Application startup time still seems to be slow with .net 2.0, and .net applications seem to be piggy w/ respect to memory. I believe that part of that is that the framework won’t try to reclaim memory very often if there’s no memory pressure, so applications can "bloat." If you’re running multiple .net apps, then they can share the framework as shared dlls. (And 2.0’s much better about that.)

    For native code, there’s always Delphi. From what I’ve heard, D2006 is pretty good. Plus with the spinoff, they’re going to be focusing on developer tools again.

  15. Cory says:

    I just have to say that I think there are a lot of people who love .NET … AND who love your blog.

    I am one of them.

    What is the definition of old-skool? Is it someone who has been developing for a certain amount of time? Or is it someone who likes a particular technology over newer stuff out there?

    I’m probably old school if it’s the first.

  16. BryanK says:

    System.Net.Mail (SMTP AUTH was a requirement)

    You might want to check out http://support.microsoft.com/default.aspx?scid=kb;en-us;555287 — that shows which CDOSYS fields you can set to which values to enable various SMTP AUTH methods with System.Web.Mail.  I’ve set smtpauthenticate to cdoNTLM (=2) in a .Net 1.1 program we use here, to authenticate against Exchange using the currently-logged-on user’s credentials, and it works fine.  (Of course then you don’t set the sendusername or sendpassword fields.)

    Not that this will convince you to abandon .Net 2.0 (better UI and BackgroundWorker sound like a good enough reason to use 2.0 in themselves), but it is possible in 1.1.
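    (For anyone wanting to try this, here is a rough sketch of what the CDOSYS approach looks like under System.Web.Mail. The field URL and the value 2 (cdoNTLM) are the ones described in the KB article above; the server and addresses are placeholders.)

```csharp
// .NET 1.1-era sketch: System.Web.Mail is a thin wrapper over CDOSYS,
// so SMTP AUTH is enabled by setting CDO configuration fields directly.
using System.Web.Mail;

MailMessage msg = new MailMessage();
msg.From = "me@example.com";       // placeholder addresses
msg.To = "you@example.com";
msg.Subject = "test";
msg.Body = "hello";

// 2 = cdoNTLM: authenticate with the currently-logged-on user's credentials,
// so no sendusername/sendpassword fields are needed.
msg.Fields.Add(
    "http://schemas.microsoft.com/cdo/configuration/smtpauthenticate", 2);

SmtpMail.SmtpServer = "exchange.example.com";  // your server here
SmtpMail.Send(msg);
```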

  17. macbirdie says:

    Wow, some .NET sweetness on a not exactly a .NET blog. New scratch program coming? :)

  18. Tom says:

    My definition of "old-skool" is someone who has been programming since the mid eighties.  That style of programming typically involved C (or Pascal if you were into that sort of thing) and Assembly.  There was no OOP (unless you were using Smalltalk, but you couldn’t find that on most low-end PCs available then), so you were stuck with structured programming.  Libraries were hard to come by, so you had to write everything.  Memory was tight, so you had to write small, or use overlays or EMM.  These modern day "Operating Systems" make it too easy.  In my day, you memorized the algorithm to compute square roots and LIKED IT!

  19. Steve Kemp says:

    Personally I’m an old-school programmer in the sense that I used to write bits of code using the Win32 API a good few years ago and only flirted with the MFC towards the end of the time I developed under Windows.

    Nowadays I’m a sysadmin rather than a developer, and I’ve switched to Linux based systems.  (After a brief segue into Solaris-land).

    Perhaps it isn’t surprising that I’m not so interested in the "newer" technology given that I don’t have the time or the current grasp of the Windows world to use it.

    But I do read this blog (and a couple of other MSDN people like Jensen Harris/Larry Osterman/Eric Lippert) precisely because Raymond explains the historical things so wonderfully well.

    Several of the "bug" discussions (or "don’t do it like this") are things I can recall fighting in my past life.  I’m also incredibly impressed at the efforts Microsoft makes to allow backwards compatibility, and find the evolution of different systems, such as the recently covered DLL pieces/linker, very informative.

    If we’re honest 99% of developers won’t care how linkers used to work in the 16 bit world, but for me it is both educational and a little bit nostalgic.

    Steve

  20. Dave says:

    For the past five years, Microsoft has been working hard to ensure .NET performs well and offers all the functionality of Win32. It would be possible to improve the performance and environment for unmanaged Win32/C++, but that’s not in Microsoft’s plans. Unfortunately, all the competition is pretty much dead now, so Win32 development is in stasis–which has both good and bad implications.

    I can ship a Win32 product that weighs only a megabyte or two but does wonderful things. It can leverage platform components like WebBrowser and allow customization through script engines. I can find plenty of examples all over the net. Sure, those examples and APIs have bugs, but both have been around long enough that the problems are *documented*.

    If I tried the same thing with .NET, the first decision is which version of the runtime to use, the widely deployed 1.1 or the emerging 2.0? If I head for the bleeding edge, I can be sure that many if not most customers will not have the latest .NET runtime, so my duty will be to haul 30MB of Microsoft bits over to their system. Then I will deal with the inevitable bugs that spring up with fresh Microsoft code that is just seeing the light of reality.

    I’m staying old-school for now.

    [In that case, you should probably unsubscribe for this week. See you next Monday! -Raymond]
  21. Tim Smith says:

    I guess I’m old school.  Programming professionally since 1985.  

    I have used WinAPI, MFC, ATL, WTL, OWL, Borland’s C++ Builder, straight C, Forth, C++ with and without STL.

    I am finding .NET to be a very nice framework to write software.  I just wish 2.0 was around two years ago when we decided to create our current software in MFC.

    I just use what is best for the company that pays me.  I don’t care what it is as long as it gets the job done well.

    BTW, I don’t have problems with bugs in MFC.  Over the years, the people who seem to have problems with MFC (and any other framework) are those who try to bend (and usually break) a framework into doing things your way instead of its way.

  22. *A couple of things I would like to know, though:  what is the memory footprint of both versions, and what was the application startup time?*

    I have a basic app which doesn’t do much in .NET 2.0 with a memory footprint hovering at about 26 megs, running for about 6 days straight now. It’s a GUI app that uses a timer to poll a database and send email notifications. Yes, I violate a good bit of the Raymond rules of conduct but it serves its purpose at this point.

    My understanding is that the first time .NET code is run the CLR is loaded and stays resident for X so that after the first application is loaded, subsequent applications take no time at all to fire up. I’m no expert and this is only what I can remember in my little noggin. I may have it totally wrong in that it just prefetches and appears to be quicker. The CLR is loaded once and shared among all .NET applications running on that platform too, so no duplication in the memory footprint there.

    My reasoning for picking 2.0 was pretty simple. Better UI controls, BackgroundWorker, System.Net.Mail (SMTP AUTH was a requirement), and the fact that I could literally get away with it. 30 meg installer? The company I work for has a T1 so 30 megs is chump change. 2.0 HAS been tested, by those companies who are early adopters. VS 2005 had more community feedback than any other MS product I know of (though I believe it was the first that exposed its bug database to the world) so it was clear someone out there other than myself was dogfooding this stuff EARLY. Hell, VS 2005 was largely built with .NET code so they had to make sure the foundation was pretty solid to begin with (though I’m sure people will argue "That’s why 2005 has so many issues").

    There are some things managed code will not be able to do well now, or may not be able to do at all. There will always be late adopters and people holding onto Win32 for dear life. Microsoft’s stance on backcompat will most likely make sure those people have a job for at least another decade if not more. Most of the people not even willing to get their hands dirty with .NET are doing so for religious reasons. You don’t have to use it in production but why not get your hands dirty with something simple that only appeals to you? You can stay current by updating your code and just witnessing how it fares instead of just saying (insert new MS technology here) isn’t quite what I’m looking for.

  23. Norman Diamond says:

    Monday, July 31, 2006 5:24 PM by Tom

    > My definition of "old-skool" is someone who has been programming since the mid eighties.

    LOL.  That must be the new old thing.

    > That style of programming typically involved C (or Pascal if you were into that sort of thing) and Assembly.

    C and Pascal weren’t invented yet at the time of "old-skool" programming.  Lisp was, but it depended on lots of impractical expensive operations like garbage collecting so no one could really use it.  The style of programming was either Cobol (if you wanted a good salary) or a combination of Fortran and Assembly.

    Then IBM invented PL/I.  They polished up PL/I very nicely both technically and in documentation, so everyone was forced to move to PL/I and had to forget about Fortran and Cobol.

    Building on the success of IBM’s experience, Microsoft polished up C# so everyone was forced to forget about VC++ and VB.

    Lots of end users won’t have to download the .Net framework because it’s built into Smartphones and Pocket PCs.  So theoretically some of us could use .Net without worrying about the impact on customers.  But VC++2005 refuses to target it.  So some of us either really do have to choose between moving to C# or continuing to study middle-aged-skool Win32 style programming.

  24. Tom says:

    @ Norman Diamond:  Not invented yet?  They’ve been around since the 70’s!  Unless you’re thinking Ancient School which most certainly was FORTRAN and Assembly (probably some straight machine code, too — for the hard core in the bunch).  I have actually had the mispleasure of working on some FORTRAN from the 50’s.  Granted, it was a text-file port of the original Hollerith punch cards, but it is just as spaghetti today as it was back then.  Those were the days…

  25. Lionell Griffith says:

    "Over the years, the people who seem to have problems with MFC (and any other framework) are those who try to bend (and usually break) a framework into doing things your way instead of its way."

    I don’t know about you but for me the problem to be solved determines the solution and not the other way around.  

    MFC says all problems are documents to be processed.

    .NET says all problems are documents to be processed on the internet.

    Highly man-machine and machine-machine interactive engineering, scientific, real time, process control, image processing and the like applications are NOT document processors.

    For the few problems I have that are documents, I can use Word, Excel, Power Point or other off the shelf software.  

    Again, why would I want to use .NET if it doesn’t match the problem set I must solve?

  26. Dean Harding says:

    > MFC says all problems are documents to be processed.

    > .NET says all problems are documents to be processed on the internet.

    While that may be true of MFC, .NET is more than just WinForms (and WinForms is more than just "MFC.NET" – I’d say that WinForms maps more directly to the native controls than MFC does… sometimes painfully so).

  27. josh says:

    “lighting fast memory allocation so long as:

    2)you don’t allocate memory very often”

    Woot.  ;)

    [It’s actually the other way around. GC is faster the more memory you allocate that has a short lifetime. -Raymond]
  28. Chris Nahr says:

    ".NET says all problems are documents to be processed on the internet."

    This is so laughably wrong. Your idea that .NET is somehow the same as MFC has no basis whatsoever in reality. Do you think you could stop making a fool of yourself in public forums?

  29. nksingh says:

    Chris Nahr:

    Ouch!  I’m sure you showed that guy!

    Geez, chill out.

  30. Chris Becke says:

    I consider myself a newbie on the programming scene. I have been programming C & C++ for merely 10-odd years now.

    During the time I’ve been ignoring, or actively avoiding, C# and .NET, it’s gone from 1.0 to 1.1 to 2.0 – each revision being incompatible with the last.

    Perhaps I will switch when it shows some signs of stability.

  31. XRay says:

    > During the time ive been ignoring, or actively avoiding c# and .net, its gone from 1.0 to 1.1 to 2.0 – each revision being incompatible with the last.

    >> ".NET says all problems are documents to be processed on the internet."

    It’s funny to see how many urban legends spread over .NET ..

    All deprecated APIs are still supported by 2.0 and work ok. And above all, they will not require any major rework to reimplement with the newer versions.

  32. Iain says:

    Old-School? Phaw!  Real Programmers use FORTRAN!  Actually, Real Programmers write in machine code.    See: The Story of Mel:

    http://www.pbm.com/~lindahl/mel.html

  33. Lorenzo says:

    I have read through Raymond’s frankly fantastic blog for a few months now, and have always wondered how he feels about .NET being touted as win32’s successor.

    Raymond, do you feel win32 is obsoleted now, or is there still a place for it in Windows Development (aside from porting code previously written for the win32 api)?

  34. Nate says:

    ".NET says all problems are documents to be processed on the internet."

    Can you elaborate on this?  .NET has none of the document-centric bloat that is in MFC.  Sure, it would be easy to add, but when you make a laughable statement like this, at least try to back it up.

  35. nikos says:

    i guess it’s a natural reaction of old dogs to revolt against anything new that overrides all one has spent so much time and effort to learn :)

    but as far as windows programming goes, one will always need a knowledge and understanding of low down procedures, if only to fix/workaround the plethora of bugs and features present in the API, windows controls etc.

    only today i discovered a fix for that age old scrolling mess of listview control with gridlines under XP, which involves overriding scrolling messages. Can you do such patches natively in .NET? i wouldn’t have thought so & i’m sticking with C++/WTL for as long as i can :)

  36. BryanK says:

    nikos — yes, I believe you can.  You can make a class that inherits any existing control’s class (e.g. ListBox, or Form), then override the protected WndProc member function.  (As long as you use WinForms, that is — you can’t override WndProc in the Compact Framework, at least not in v1.1.)  Make sure you call the base class WndProc as needed, though; otherwise everything else breaks.
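    A minimal sketch of what that looks like (WinForms; the control and the message intercepted here are just for illustration):

```csharp
// Sketch: subclass a control and intercept a window message that the
// managed wrappers don't surface as an event. Requires a WinForms app.
using System.Windows.Forms;

public class NoContextMenuListView : ListView
{
    private const int WM_CONTEXTMENU = 0x007B;

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_CONTEXTMENU)
        {
            return;          // swallow the message; do custom handling here
        }
        base.WndProc(ref m); // pass everything else to the base, or things break
    }
}
```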

  37. jstanley says:

    A few years ago, my company was looking at using "Managed C++" to write the user interface for a product.  This UI would have to be compiled against thousands of lines of portable C++ backend code (which we compile regularly using VS71 and gcc).  After whipping together a barebone UI, I was puzzled by odd behavior that made me question my sanity.  A C++ function returned false; the UI acted as if it had returned true.  Eventually I examined the assembly output, and found this was a bug in the compiler–C++ thinks a bool is one byte, and sets the return value in AL, while .NET thinks a bool is 4 bytes, and examines the entire contents of EAX.  It is beyond me how something as trivial as bool marshaling could have been broken in a released product–and I was even more amazed to find that the bug had been known about for TWO YEARS and not fixed.  My conclusion was, the .NET framework is a toy, not ready for prime time.  Have things improved any in the past two years?
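    (For what it's worth, the usual workaround for that class of bug is to tell the marshaler the size explicitly. A sketch; the native function name is made up:)

```csharp
using System.Runtime.InteropServices;

static class Native
{
    // Hypothetical native function for illustration:
    //   extern "C" bool backend_is_ready();   // C++ bool: one byte, returned in AL
    // Without the attribute, the marshaler may treat the return value as a
    // 4-byte BOOL and read all of EAX, picking up garbage in the upper bytes.
    [DllImport("backend.dll")]
    [return: MarshalAs(UnmanagedType.U1)]     // marshal as a single byte
    public static extern bool BackendIsReady();
}
```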

  38. That was why I didn’t use IPSec, so it’s okay to use then?  ;)

  39. Michiel Salters says:

    GC is faster the more memory you allocate that has a short lifetime. -Raymond

    Popular misconception, but you can wrap a deterministic memory allocator interface (malloc/free) around a GC one. malloc just gets the memory. free adds it to the list that the GC considers for collection. Because that set now doesn’t require trawling, the GC doesn’t have to work as hard. It’s also more likely to be in cache. The GC can still clean up the free()d memory in its own background thread.

    True, this is not the MSVC CRT implementation of malloc/free, so GC (in some cases) may be faster than that, but it will be slower in others. I guess the current CRT is tuned for smaller and fewer allocations. If the GC allocator turns out to be better in all cases, its algorithm is also available to GP allocators (GP=Garbage Prevention)

    [That was obviously a generalization on my part. Of course you can come up with scenarios where it fails to hold, but the question is whether the effort to reach that point is worth the benefit. -Raymond]
  40. e.thermal says:

    > ".NET says all problems are documents to be processed on the internet."

    I thought I heard it all, until now; that has to be the worst conceptualization of a framework I have ever seen.  A large part of my job is architecting/designing solutions to business problems with technology.  A large part of that is dealing with misconceptions like "My bank uses it, so it must be good?".  In my role I have to be very agnostic in my solutions, and since .net has come around I have had a very hard time finding a reason not to use it.  There are some very good technical reasons for not using .net and using native C++ or some other solution like Java, but those situations are so rare in the businesses that I have worked in.  

  41. That one Ian guy says:

    Oh no you didn’t!

  42. DriverDude says:

    "What a strange argument. I don’t mention IPsec; does that mean you shouldn’t use it either?"

    Actually yes, according to my network admin friends, I shouldn’t use IPsec (there are better and easier choices, they say)

    People will cling to the most foolish reasons to justify their beliefs. Most people I know of who disdain .NET have not actually used it. Some use MFC, so it’s not as if they’re all anti-Windows. Go figure.

  43. Chris Becke says:

    @nikos – By the time WinFX hits, well, it *looks* like enough stuff at that point is going to be written natively in .net that you will have to do a substantial portion, if not all, of your hacking inside the .net framework.

    I, personally, am hoping that WinFX will be built upon a rational (but secret) C API, but I don’t hold out that much hope for that. A full ground-up implementation in .NET however is a very good second best, because the current hybrid system us poor C/C++ developers are exposed to is, well, appalling.

    The current situation is that COM controls – based in theory on "nice" OO-friendly interfaces – cannot hide the fact that, at the core of every interface-exposing COM control, is an HWND-driven WindowProc. The obsessive-compulsive core of my being – which on one hand gave me the drive to be a programmer – despairs that COM was never finished enough to totally abstract away the non-COMness of (COM) controls.

  44. Matt says:

    "lighting fast memory allocation so long as:

    2)you don’t allocate memory very often"

    Woot.  ;)

    [It’s actually the other way around. GC is faster the more memory you allocate that has a short lifetime. -Raymond]

    I phrased it wrong – I meant that, like any other memory allocation method with a finite pool, it’s quick if you avoid actually allocating memory :). Even so, any allocation in gen 0 which doesn’t fill it is very fast (since it is basically incrementing a pointer by the amount of memory used) and, if your use of new objects is sufficiently small, you can avoid even the (very) fast gen 0 collections once you hit steady state.

    .Net makes this *much* easier to achieve.

    Obviously whether this is a useful thing to do is dependent on your app. If you are unwilling to take the occasional hit of a full GC (especially under load) when responding to something like, say, market data ticks, then structs and, if need be, stackalloc can make avoiding new much easier than, say, Java, where many APIs force you to trigger memory allocation as they have no other option if the available primitives don’t fit their problem domain…

    In my experience I find the occasional log statement triggers some allocation but the gen 0 collections this causes can be soaked up easily without any measurable effect on throughput and latency in my app. I expect no more than 1 gen 2 collection in 3-4 hours of use – and this is with a steady allocation of immortal objects.

    This is all without fancy tuning parameters to the runtime (something I was always forced to do when handling vm’s in java over 512MB)

    I used to really like java for its simplicity – ever since I have had to write higher-performance code I have loved c# and .net.

    Matt
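    (To make the struct point concrete, here is a small sketch; the names are mine, not from the thread. An array of structs is one heap allocation, while the same data as classes is one allocation per element, which is exactly the gen-0 traffic being avoided.)

```csharp
using System;

public struct PointS { public int X, Y; }   // value type: stored inline in the array
public class PointC { public int X, Y; }    // reference type: one heap object per element

public static class GcSketch
{
    // One contiguous allocation; no per-element objects for the GC to track.
    public static long SumStructs(int n)
    {
        var pts = new PointS[n];
        for (int i = 0; i < n; i++) { pts[i].X = i; pts[i].Y = i; }
        long sum = 0;
        foreach (var p in pts) sum += p.X + p.Y;
        return sum;
    }

    // Same computation, but n separate gen-0 allocations.
    public static long SumClasses(int n)
    {
        var pts = new PointC[n];
        for (int i = 0; i < n; i++) pts[i] = new PointC { X = i, Y = i };
        long sum = 0;
        foreach (var p in pts) sum += p.X + p.Y;
        return sum;
    }
}
```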

  45. ricom says:

    >>I expect no more than 1 gen 2 collection in 3-4 hours of use – and this is with a steady allocation of immortal objects.

    I don’t know that this pattern is *typical* but it is achievable.  I’ve helped several Fortune 50s to do just this for real-time data processing… it’s all about managing your allocation rates and making sure you have good lifetime.

    I’ll be talking at Gamefest (http://www.microsoftgamefest.com) about doing the same thing in the context of a game.

    It’s not that hard to do, it just needs to be part of your thought process.  And under these circumstances garbage collected memory is a sweet deal.

    After gamefest I’ll write a blog covering the topic as well.

       -Rico

  46. Norman Diamond says:

    Monday, July 31, 2006 9:51 PM by Tom

    > I have actually had the mispleasure of working on some FORTRAN from the 50’s.

    Then you KNOW why I LOLled at your assertion that 80’s programming was "old-skool".  Same goes for the 70’s.

    At least when a card reader destroyed my file it let me know which file it was destroying.  If it was a source file I could punch replacement cards, and if it was an object file I could recompile the source.

    On the other hand, you have more experience than I do!  I’ve only had to READ Fortran from the 50’s.  The oldest I had to upgrade was from the 60’s.

    Tuesday, August 01, 2006 10:56 AM by jstanley

    > I was even more amazed to find that the bug had been known about for TWO YEARS and not fixed.  My conclusion was, the .NET framework is a toy, not ready for prime time.  Have things improved any in the past two years?

    No, the .Net framework isn’t a toy, Microsoft’s handling of bugs is a toy.  Microsoft’s treatment of customers is not ready for prime time.  No, things have not improved in the past fifteen years.

  47. Matt says:

    regarding ricom’s reply – I look forward to that immensely.

    Agreed the scenario is not typical.

    Amusingly this kind of usage occasionally makes me pine for macros (and I *hate* reading macros in c / c++) for the common operation

    if (Log.IsDebugEnabled)

    {

       Log.Debug(/* some thing that, to be useful will inevitably build a new string*/);

    }

    becoming

    LogDebugIfEnabled(/*the code*/);

    I know this makes me a bad person.

    Any chance of a lazy keyword

    public void Foo(lazy string text)

    {

       if (/* I don’t need to do this */)

       {

           return;

       }

       /* do something with text. */    

    }

    obviously you would need to use something like

    Foo(lazy { /* expression evaluating to s */ });

    Ensuring that the only reference was inside the method which triggered its evaluation.

    Obviously all sorts of issues with executing contexts and multiple threads (not to mention the work required to pass the lazy reference itself).

    Come to think of it this idea probably makes me a worse person than wanting macros :)

    p.s. This post is not terribly serious
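    (A delegate gets you most of the way to that hypothetical lazy keyword today: the message is only built if the guard passes. Sketch, with made-up names:)

```csharp
using System;

public static class LazyLog
{
    public delegate string MessageBuilder();   // C# 2.0-friendly; Func<string> also works

    public static bool IsDebugEnabled = false;

    // Taking a delegate instead of a string defers building the message
    // until we know debug logging is actually on.
    public static void Debug(MessageBuilder buildMessage)
    {
        if (!IsDebugEnabled) return;           // builder never runs when logging is off
        Console.Error.WriteLine(buildMessage());
    }
}
```

    So the guarded `Log.Debug(...)` idiom becomes `LazyLog.Debug(delegate { return /* expensive string */; })`. Uglier than a keyword, but no macro required.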

  48. Michael J. says:

    > For native code, there’s always Delphi.  From what I’ve heard, D2006 is pretty good.  Plus with the spinoff, they’re going to be focusing on developer tools again.

    Delphi was and still is the best IDE + library for Win32.

Comments are closed.