“Managed” vs. blow your bloody foot off!!! er.. I mean “unmanaged”


I got a response from someone saying:

“Cyrus-

I’ve been reading your blog for about a week now, and you seem to be in touch with the programming community. I have found myself torn between MS tools and open source tools. I really, really, really like Visual C++, but as time goes on it seems that Microsoft is trying to push us to managed code and away from native code. The new C++ express beta doesn’t even include the Win32 platform SDK!
My question– why should I switch to managed languages? What is better about C# and the managed philosophy in general?”

Well, first off, I thought I’d address the part about the Win32 PSDK. The express SKUs were explicitly intended to be “lightweight”, i.e. with a minimum amount of fuss you could be on your feet learning to develop software, or just developing it. The last time I checked, the minimum Platform SDK download was 180 MB. That’s ~6 times the size of the entire express SKU. Note: you can still download the PSDK and have it be 100% supported in the C++ express SKU. However, it’s my feeling that it was left out so that people wouldn’t have to download 200 MB instead of 25 🙂

As to whether or not MS is trying to push people toward managed code… Well… that’s a toughie. I’ll give you my opinion on it, and then how I think MS looks at it as well.

First: I’ll admit right off that I really prefer managed languages to unmanaged ones (for the sake of brevity I’m going to use C++ and C# for unmanaged/managed). With the work I do I see absolutely no benefit from using C++. I spend much of my time managing resources that I just don’t care about, and when something goes wrong (which it inevitably does, since I’m human) the amount of pain I go through trying to find the problem is enormous. If anyone out there has ever tried to figure out a heap corruption, you know what I’m talking about. There are tools that can help out a bit in these cases, but sometimes you’re just on your own, and it sucks.

Also, what drives me nuts: say I code up all my stuff perfectly (not going to happen, but for the sake of argument), with no memory corruption or whatnot. Even then, it is quite possible for some other bit of code in the enormous app that is VS to make a mistake and trounce on memory that I’m using. Poof, we go up in a puff of smoke and we’ve ruined things for the user.

Back in the days of DOS/Win3.1/Win9x/OS-pre-X we had this problem with full-blown applications trampling over each other. It was a terrible situation and led to horribly unstable systems and crappy user experiences. We’ve since pushed that problem into the individual application space, but it still exists. Realize that all it takes is one byte miswritten and suddenly your entire app is in an untrustworthy state where it could crash at any moment (and trying to determine why can be enormously difficult). We sometimes get Watson dumps where we literally cannot understand how the system could have gotten into such a state. This is usually why we wait until we have a few “hits” (i.e. multiple crashes in the same place) before doing a deep investigation: it’s possible you’ve only crashed in one location because some errant thread misbehaved a long time earlier.

I say this with no amount of hyperbole: I am 1/10th as productive in C++ as I am in C#. I’ll flip that for more impact: I’m 10 times more productive in C# than in C++. This is partly because I just don’t have to be concerned with these issues every moment that I’m developing, but it’s also due to the enormous hit in productivity that happens when something really bad happens in C++.

I read a quote one time (which I can’t find now) that said, in effect, “it pains me to see so many finger-less developers out there.” It’s something I agree with. Unmanaged languages put so much unnecessary burden on the developer, and for what?

Some people argue that this “getting closer to the hardware” is necessary for performance and it’s a sacrifice they are willing to make. However:

  1. I don’t believe that that’s the case
  2. Even if it is the case, the loss in perf is something I am willing to take given the enormous boost in productivity and safety

The reason I don’t think performance is impacted is that I’ve seen the amazing work done in languages like OCaml to get amazing performance out of a completely managed language, and I’ve also seen what the CLR guys have been able to accomplish. Note: I’m also a firm believer that a managed language can have better performance than unmanaged languages, because the runtime/compiler can assume more. In a system with raw memory access, you basically can’t trust anything. Any integer might be a pointer to something that exists, and you don’t know if it’s safe to do anything. In managed languages you have so much more freedom for optimization. And, as I said, even if there is a perf degradation, I don’t really care much, because it’s not something that a user of my application would notice, and they benefit far more from my productivity.
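
To make that concrete, here’s a sketch of the classic example of an assumption a managed JIT can make. A CLR array can’t be aliased by a stray pointer, and its length can never change, so the JIT can prove the index is always in range and drop the per-element bounds check:

    class BoundsDemo
    {
        // The JIT recognizes this loop shape: i always satisfies
        // 0 <= i < arr.Length, and no other code can resize the array
        // or scribble over its length field, so the per-element bounds
        // check on arr[i] can be eliminated. A C++ compiler staring at
        // raw pointers can rarely prove that much.
        static int Sum(int[] arr)
        {
            int total = 0;
            for (int i = 0; i < arr.Length; i++)
                total += arr[i];
            return total;
        }

        static void Main()
        {
            System.Console.WriteLine(Sum(new int[] { 1, 2, 3 }));  // 6
        }
    }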

Whew… that was a lot so far. Now, on to what MS thinks. First, I wanted to mention Trustworthy Computing (TWC) (highly recommended read). There’s a lot to it, but some of the core tenets for me are “Safe by design” and “Safe by default”. To me C++ is neither. C# attempts to be, and does a much better job of it than C++. In the cases where you do start doing unsafe things, well… you have to declare them unsafe. That, of course, marks your code as unverifiable; for code that stays verifiable, protections added to the OS can ensure that those applications cannot commit bad actions (or be pwn3d, etc.). Another tenet is “accuracy”, which relates to keeping user data safe. It’s enormously hard to do that in a C++ system when your app can get corrupted at any single moment. Once the app gets corrupted you can make absolutely no guarantees about the data it holds, and you aren’t able to ensure that user data is preserved.
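
And here’s a tiny sketch of what “declaring it unsafe” looks like. None of this compiles unless you write the unsafe keyword and build with the /unsafe compiler switch, which is exactly the “safe by default” point:

    // compile with: csc /unsafe stomp.cs
    class UnsafeDemo
    {
        // Without 'unsafe' (and the /unsafe switch) the compiler rejects
        // this outright. With it, you get C++-style raw pointers, and the
        // assembly becomes unverifiable.
        unsafe static void Stomp()
        {
            int x = 42;
            int* p = &x;
            p[1] = 0x1234;  // out-of-bounds write: possible, but only after opting in
        }

        static void Main() { }
    }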

Second, I think with all the work that’s been done on Longhorn, it’s fairly clear that MS is taking a pretty managed-centric view of the future. Managed languages will be first-class citizens in the OS and almost all capabilities will be available to them. Note: this does not mean unmanaged languages are left out. There is a strong push to make these components available to both sides. That was one of the great features of .NET: I can write a managed class that you can use from your unmanaged one. I think you’ll see with Visual C++ 2005 that you have an enormous amount of power and the ability to mix and match unmanaged/managed code to your heart’s content.

Finally, as to open source tools: use whatever tools make you a better programmer. I don’t think anything stops you from using both together to make the best software possible. I certainly have no problem using open source tools, and I’m a big fan of “Services for Unix”, which allows me to run all of the command-line tools I was used to on my school’s Unix systems.

Did that help answer your questions?


Comments (48)

  1. Brian Jorgensen says:

    Yes, great answer, and much longer than I expected! I agree with you on most of it except for a few things:

    1. The PSDK size issue does make sense. However, a Win32 programmer doesn’t need the whole SDK… we just need the Windows headers and libraries. Something that bothers me is that one of the included templates won’t compile due to the lack of Win32 headers and linkage.

    2. "Safe by default"– I don’t know that I buy into this. Why should MY crappy code work? The fact that I don’t get a crash when I mess up can mean larger bugs as the project progresses.

    3. "it is quite possible for some other bit of code in the enormous app that is VS to make a mistake and trounce on memory that I’m using"– how is this any different in memory managed languages? The memory is still being allocated and released, so how is this affected by language? And if it IS a bug in the IDE, what keeps VS/C# from doing the same thing?

    4. You can call it psychological, but the fact is that lower-level just FEELS better to some developers. Being able to directly decide what the processor does (umm… yeah, I know I’m walking on thin ice here) is comforting to me as a developer. I know that if there is a mistake, it is MINE and not the language’s.

    Thanks for the debate. It’s always fun and educational.

    Regards

    Brian

  2. Brian:

    1) That’s fantastic information. Please file a bug on #1. If templates won’t compile then you’re in bad shape 🙂

    C# Express pulls down the minimal .NET Framework in order to be usable; it’s certainly reasonable to expect the same of C++ Express.

    3) In C#, no other thread or DLL can mess up my memory. You don’t have raw access to it. You can’t accidentally write one byte past the end of an array (or into some random memory address), have that succeed, and leave the system in an unknown state.
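
    A two-line sketch of what I mean, where the same off-by-one that silently corrupts memory in C++ fails loudly and immediately in C#:

    int[] buffer = new int[10];
    buffer[10] = 1;  // throws IndexOutOfRangeException; nothing is corrupted

    No mystery crash three hours later; you get the exception at the faulty line.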

    4) If it feels better to you, hey, all power to you 🙂

    I know people who enjoy assembly. However, I myself am extremely unconcerned with such issues. To me, lower-level ‘feels’ worse, because I now have to take care of issues that I know I will screw up no matter how careful I am, and those issues will just sink the ship, period. If I can remove an issue and leave it in the hands of a system that will do basically as well as I would have, but with far fewer bugs, then I’m happier 🙂

    However, this becomes an issue once you work on a project with more than one person. Are you OK with writing fantastic code and then having it break because someone else did a double free, etc.?

    As I said, you are 100% free to keep coding in a completely unmanaged manner. However, I believe that for the quality of apps to improve, developers need to stop wasting time micromanaging things.

    Let me ask you a question. Do you ever jump down into assembly in order to manage registers or to hand-optimize how parameters are passed to/from a particular function? (Maybe you do.) I would guess not. Why? For me there are a few reasons.

    a) It’s error prone

    b) The benefit probably just isn’t that high.

    I see the exact same problem with C++ vs. C#. The things I need to handle in C++ are error prone, and there’s no real reason to do it myself when the runtime can do a better job of it.

    An example of this is ref counting. We currently have a ref-counting scheme in place to help us try to deal with memory in a semi-sane manner. Of course, this scheme has certain restrictions, one of which is that you cannot allow circular references. But circular references arise quite naturally in a lot of the coding we do. Note: when I say "a lot" I really mean "a lot". I would estimate that in perhaps 75% or more of the system, natural circularities would emerge. However, if we allow that to happen then our ref-counting scheme breaks and we leak memory. So, because of this, we are forced into a few choices:

    a) Detect circles automatically. Really expensive and we don’t really know how to do it properly

    b) Detect circles by realizing where they can exist, and not using a ref-counting pointer in that location. This is incredibly bad because now we have a pointer to something that might go away at any instant.

    c) Detect circles by realizing where they can exist, and rearchitect things so that a circle cannot exist.

    ‘a’ is something we aren’t able to do, and if we tried we’d probably get it wrong (and would just be reimplementing CLR work). ‘b’ really sucks because it can lead to buggy code. ‘c’ really sucks because now I’ve had to hork my nice clean design into something ugly, because the system can’t handle something this simple.
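
    To show what the CLR buys you here, a minimal sketch (names hypothetical): a cycle that leaks under naive ref counting, but that the tracing GC reclaims with no help from me:

    using System;

    class Node { public Node Next; }

    class CycleDemo
    {
        static void Main()
        {
            Node a = new Node();
            Node b = new Node();
            a.Next = b;
            b.Next = a;  // circular reference: poison to a ref-counting scheme

            WeakReference w = new WeakReference(a);
            a = null;
            b = null;

            GC.Collect();
            Console.WriteLine(w.IsAlive);  // False: the cycle was collected anyway
        }
    }

    (Run that in an optimized build; in a debug build the JIT may extend the locals’ lifetimes and keep the cycle alive until the method ends.)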

    Note: I’m writing part of an IDE, not a memory manager or a low-level OS service/device driver. An IDE has _nothing_ to do with memory management. So why the hell do I have to think about it??

    2) "Safe by default: Software is shipped with security measures in place and potentially vulnerable components disabled"

    By default, unmanaged languages are unsafe. You cannot verify them. You can exploit bugs in them to end up executing any code you want, etc. These are things you cannot do in a managed language. Managed languages, by contrast, can be verified and proven to be safe from certain classes of runtime bugs.

    That’s in line with "safe by default".

  3. I always want to know what the processor will do on my commands. The reason is that the customer requires me to make his processors do something. My job is making processors do exactly what the customer requires. My customer does not ask me to write a good program in C++ or C#; he or she asks me to make his or her processors do some work and display results. With that I’m much happier than those guys who are forced to use specific languages and technologies. I usually have a chance to choose the technology, or to mix several of them, and my choice is the language providing tight control with the least possible trouble. Thus I’d like a single simple language providing seamless (I know this word too!) access to all the components on my customer’s machine. Right now I’m trying C#. It’s useful enough, apart from the disadvantages of the .NET Framework itself. C# provides a good level of control over the .NET Framework, and that is what I need. Maybe inline IL… no, that is not a good idea. What about defining the methods of a single class in different languages? Interop should help, but it’s not as simple as writing one method in an IL file and all the other methods of the class in a C# file.

  4. Lee Alexander says:

    I also prefer working in a managed environment, but I was wondering: perhaps your 10x productivity-increase claim was slightly skewed through the rose-tinted lenses of *new* code syndrome? Something as large as VS.NET must have code going back many years, and through that time once-"perfect" code gets ugly; cracks show up as unforeseeable requirements gnaw on it, much like free radicals do on us.

    Perhaps new code will always feel better and as a result you will always feel more productive working around it….

    Regards

    Lee

  5. Lee: No, it’s not skewed. It is a measure of how productive I am now, writing any code in C++ vs. the same code in C#.

    I’m more productive in 4-year-old Java code than I am in 2-week-old C++ code 🙂

  6. Lee Alexander says:

    Fair enough. Keep up the good blog!

    Regards

    Lee

  7. Dr Pizza says:

    "I spend much of my time managing resources that I just don’t care about, and when something goes wrong (which it inevitably does, since I’m human) the amount of pain I go through trying to find the problem is enormous."

    I did have a nice long reply on why this is wrong thinking; C++ written /as C++/ (and not simply C-with-classes) makes resource management of important resources (i.e. not memory) much simpler than in GCed environments like C#-on-.NET.

    But then SharpReader destroyed it, because it gets the scope wrong when switching to the application (it always puts the cursor in the top panel, not this one).

    So, I’ll just post a short response: you are wrong, RAII rules, GC is inadequate for resources other than memory, and C# takes away useful resource management tools.

    “but… but… but… using!” Using, bollocks. Finalizers are way too complicated to expect me to write them myself, courtesy of resurrection.

  8. Actually, Dr. Pizza, RAII absolutely destroyed a bit of code I had written earlier _because_ of resurrection. Whee. Of course, in the C++ world this was painfully difficult to track down because, of course, the memory was still pseudo-usable, and only later, when the system had reclaimed it for some other purpose, would we crash.

    You don’t need a finalizer to get good RAII semantics in a language like C#. You just need a Dispose method that will return the unmanaged resource back to the system. If you’re messing around with finalizers, then you’re entering into a world of hurt 🙂

    BTW, can you name one important (non-memory) resource that is harder to manage in a managed world? I, personally, can’t think of any.

    Note: RAII helps not at all with all of the reasons why I dislike C++. RAII doesn’t stop a single miscreant write call. It doesn’t stop a double free corruption. It doesn’t stop a circular reference.

    You say that memory isn’t an important resource. I completely disagree. It’s the most important resource. If you step out of line once while using it you no longer have any guarantees about the state of your system whatsoever. That your program is even running at all after a mistake with the memory system is a scary fact, and what you might be doing to a user’s data in that time is just unacceptable.

    Seriously. I need some information on how:

    class ResourceCleanup {
        Resource resource;
    public:
        ResourceCleanup(Resource resource) {
            this->resource = resource;
        }
        ~ResourceCleanup() {
            Cleanup(resource);
        }
    };

    is any better than:

    class ResourceCleanup : IDisposable {
        Resource resource;
        public ResourceCleanup(Resource resource) {
            this.resource = resource;
        }
        public void Dispose() {
            Cleanup(resource);
        }
    }

    and then:

    ResourceCleanup rc(resource);

    vs.

    using (ResourceCleanup rc = new ResourceCleanup(resource))

    Bollocks isn’t an acceptable response 🙂

    Note: JayBaz wrote a good piece on why you shouldn’t use a finalizer, and on the code you’d want to write to indicate that you’d used a resource improperly (i.e. didn’t dispose of it when you should have). Using his pattern, you get feedback immediately when you are doing something wrong with your objects. You get RAII semantics, and you don’t have to worry about finalizers and resurrection oddities.

    Note: I still stand by my original statement. In managed languages I spend a ridiculously smaller amount of time managing any resource than I do in C++. Now, the majority of that time is spent managing memory. However, any other resource also takes far less time to manage in managed code than in unmanaged code.

  9. Dr. Pizza: I was a little brief on the code I wrote down. Actual code would do a bit more checking to make sure you didn’t double dispose etc.

    But the basics are there in their raw form. Again, I recommend checking out:

    blogs.msdn.com/jaybaz_ms for code that shows how to ensure that when users aren’t handling a resource appropriately, they know about it while developing and running their app.

  10. arhra says:

    well, if you’re using multiple different types of resources, you end up with nested using(){…} blocks, which gets to be a serious pain in the arse. It’d be quite nice if there were a way to declare a variable and have it auto-disposed at the end of the current scope (which is how C++/CLI handles RAII semantics, I believe). You obviously wouldn’t want it to be the default, but maybe recycle another old C keyword and use ‘auto’?

    Overall, though, I have to agree with Cyrus. Managed is vastly nicer than unmanaged. Although decent C++ with lots of smart pointers and suchlike isn’t too bad, at some point you have to deal with stuff other people have written, at which point you get raw pointers flying all over the place and you’re back in blow-your-bloody-foot-off land again. You still have to deal with the fact that smart pointers aren’t all that great, though (yay, reference-counting automatic memory management. Bugger, I need a circular reference, back to raw pointers and managing it myself).

    So, yeah. You’re right, PeterB is wrong. :judge:

  11. Dr Pizza says:

    "Actually. Dr. Pizza, RAII absolutely destroyed a bit of code I had written earlier _because_ of resurrection."

    No it didn’t. In the RAII world there is no resurrection. You get a nice simple crash instead of having to figure out how resurrection works.

    "You don’t need a finalizer to get good RAII semantics in a language like C++. You just need a dispose method that will return the unmanaged resource back to the system. If you’re messing around with finalizers, then you’re entering into a world of hurt 🙂 "

    If I want to emulate the "always called unless you call exit()"-ness of C++’s destructors, I need finalizers.

    "BTW. can you name one important (non-memory) resource that is harder to manage in a managed world? I, personally, can’t think of any. "

    HANDLEs, GDI handles.

    "Note: RAII helps not at all with all of the reasons why I dislike C++. RAII doesn’t stop a single miscreant write call. It doesn’t stop a double free corruption. It doesn’t stop a circular reference. "

    It does stop a double free; if you never write ‘free’ (or ‘delete’/’delete[]’) you can never have a double free (or double delete/delete[]). Destructors fire exactly _once_. Calling ‘delete’ should be /extremely/ unusual.

    And frankly, miscreant writes are /so/ easy to avoid (by just /not writing them/) I can’t honestly believe you’re bringing them up. I can’t even envisage how you could do such a thing by accident.

    "You say that memory isn’t an important resource. I completely disagree. It’s the most important resource. If you step out of line once while using it you no longer have any guarantees about the state of your system whatsoever."

    I’d rather clobber a buffer in an application than run out of GDI handles. At least clobbering a buffer will tend to be limited in the scope of damage it can cause. Running out of GDI handles typically causes /other/ applications to start failing.

    "Seriously. I need some information on how:

    is any better than:

    and then:

    ResourceCleanup rc(resource);

    vs.

    using (ResourceCleanup rc = new ResourceCleanup(resource))"

    Aside from being correct (your Dispose method is faulty, as your next post concedes), being less typing, never leaving an object in an unusable state, and not requiring a new scope to be created for each new used type, you mean? That I can manage resources in an extensible, generic way is an added bonus.

    "Note: JayBaz wrote a good piece on why you shouldn’t use a finalizer and the code you’d want to write that would indicate that you’d used a resource improperly (i.e. didn’t dispose of it when you should have)."

    I shouldn’t have to dispose of anything unless I have peculiar needs for its lifetime. With RAII I get the right behaviour 95% of the time by default; with GC I don’t. I don’t often do the things that GC is good at (for example, returning arbitrary graphs of objects from a function), and judging by the thousands of lines of code I read, I’m not alone in this. The majority of the time, scope is a good enough approximation for lifetime that RAII does the Right Thing.

    "using his pattern you should get feedback immediately when using your objects that you are doing something wrong. You get RAII semantics, and you don’t have to worry abuot finalizers and resurrection oddities. "

    I can’t find his pattern, so for all I know he may have said nothing more than "don’t do it because the semantics are complicated and annoying".

    "Note: I still stand by my original statement. In managed languages I spend a ridiculously smaller amount of time managed any resource that I do in C++. Now, the majority of that time is spent managing memory. however, any other resource also takes far less time to manage in managed code than in unmanaged code. "

    Are you actually writing /C++/, or just C with s/malloc/new/?

  12. Dr Pizza says:

    " Bugger, i need a circular reference, back to raw pointers and managing it myself"

    How often do you /actually/ need a circular reference?

    Seriously. What are you doing that uses them copiously? ’cause they don’t seem to be a big deal in your common-or-garden business apps. VB has for years had the circular reference "issue", yet the drooling Morts who use VB day in day out have managed just fine.

  13. Wilka says:

    Arhra:

    "if you’re using multiple different types of resources, you end up with nested using(){…} blocks"

    You don’t need to nest them in {}, you could just use something like:

    using (ResourceCleanup rcOne = new ResourceCleanup(resource))
    using (ResourceCleanup rcTwo = new ResourceCleanup(otherResource))
    {
        // …
    }

  14. arhra says:

    I can’t believe i never thought to try that.

  15. arhra: if you type:

    using (Foo f = …)
    using (Bar b = …)
    using (Baz z = …)

    I find you use just as much overhead as typing:

    FooCleaner f(…)
    BarCleaner b(…)
    BazCleaner z(…)

  16. Dr. Pizza: "How often do you /actually/ need a circular reference? "

    As I said before: "when I say "a lot" I really mean "a lot". I would estimate that in perhaps 75% or more of the system, natural circularities would emerge"

  17. Dr. Pizza: Again, the language/system is _forcing_ me to redesign away from the natural way to code things up into an unnatural form.

    We have a system here for representing the C# language. The C# language is nothing but a graph of information.

    Methods have return types which refer to types which contain methods.

    Generic methods have type parameters which refer to constraints which can then refer to type parameters.

    The entire system is a graph. It is something which will _necessarily_ contain cycles. This is something I am forced to manage myself, and doing so is incredibly error prone.
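
    A hypothetical fragment of such a model, just to make the cycle visible:

    using System.Collections.Generic;

    // Hypothetical symbol classes: the cycle Method -> Type -> Method
    // is inherent in the domain, not a design mistake.
    class MethodSymbol
    {
        public TypeSymbol ReturnType;        // a method refers to a type…
    }

    class TypeSymbol
    {
        public List<MethodSymbol> Methods;   // …which refers back to methods
    }

    No amount of rearchitecting makes that relationship acyclic; the domain itself is cyclic.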

  18. Dr. Pizza:

    "If I want to emulate the "always called unless you call exit()"-ness of C++’s destructors, I need finalizers. "

    That’s where JayBaz’s pattern works differently. You must destruct objects with Dispose, or else he considers you to have a bug. He also thinks that using finalizers to free resources is a crutch for sloppy programming.

    Instead, what he does is free the resource only in the Dispose method, and then have the finalizer assert, in debug builds, if it’s ever called on a non-freed object. That way, while coding and testing your app, you know immediately that you’re doing something wrong and that you should have a using statement.

  19. Dr. Pizza: "It does stop a double free; if you never write ‘free’ (or ‘delete’/’delete[]’) you can never have a double free (or double delete/delete[]). Destructors fire exactly _once_. Calling ‘delete’ should be /extremely/ unusual. "

    They should be. However, in an extremely heterogeneous system where people are passing objects around left and right, it’s not uncommon for someone to mix up a smart-ptr with an auto-ptr. All it takes is one miscreant auto-ptr deleting something that someone else owns for these issues to occur.

  20. Dr. Pizza: "Are you actually writing /C++/"

    Very very very much writing C++.

  21. Dr. Pizza: The full pattern is this:

    using System;
    using System.Diagnostics;

    public class MyClass : IDisposable
    {
        void IDisposable.Dispose()
        {
            // Actual cleanup goes here.
            DebugOnClose();
        }

        // Compiled away entirely in non-DEBUG builds.
        [Conditional("DEBUG")]
        void DebugOnClose()
        {
            GC.SuppressFinalize(this);
        }

    #if DEBUG
        // Runs only if Dispose was never called: that is the bug signal.
        ~MyClass()
        {
            Debug.Fail("MyClass was not properly disposed");
        }
    #endif
    }

    In this manner you know immediately whether you have used any disposable resource (like your GDI handles) properly.

    To be extremely safe, you don’t have to make this debug-only. You can keep this code in free builds as well. Now you don’t have to worry about misusing any resources. You also don’t have to bother with resurrection issues or bizarro finalizers.
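
    A quick usage sketch of the pattern above; the first use is correct, and the second is exactly the mistake the debug finalizer exists to catch:

    using (MyClass c = new MyClass())
    {
        // … use the resource; Dispose runs on scope exit
        // and suppresses the finalizer
    }

    MyClass leaked = new MyClass();
    // never disposed: in a DEBUG build the finalizer eventually
    // runs and Debug.Fail flags the misuse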

  22. Dr. Pizza: RAII works because, on leaving scope, the destructor of that cleaner class ensures that some bit of cleanup code is called (even in the presence of an exception). That cleanup might be releasing a ref count, a call to delete, or anything else you might choose.

    These are the exact same semantics as a using with an IDisposable object.

    You said: "Aside from being correct (your Dispose method is faulty, as your next post concedes), being less typing, never leaving an object in an unusable state, and not requiring a new scope to be created for each new used type, you mean? That I can manage resources in an extensible, generic way is an added bonus. "

    a) The Dispose method I just listed is not faulty. Well, it’s only as faulty as an equivalent C++ destructor could be.

    b) Less typing. Yup. You’re right on that.

    c) You don’t need a new scope to be created for each new used type.

    d) You can manage resources in an extensible (generic?) way. That’s the benefit of IDisposable being an interface that you can implement any way you see fit. Release a refcount if you want, close a transaction, whatever. You have complete control, in the same way you have complete control in the destructor of the object you’d written in C++.

  23. Dr. Pizza: "I don’t often do the things that GC is good at (for example, returning arbitrary graphs of objects from a function), and judging by the the thousands of lines of code I read, I’m not alone in this"

    I do often do the things that GC is good at (for example, returning arbitrary graphs of objects from a function), and judging by the thousands of lines of code I read, I’m not alone in this 🙂

    Note: there is nothing in a managed language that stops you from returning only acyclic structures from your code. But there are serious drawbacks to trying anything more than that in unmanaged languages.

  24. DrPizza says:

    "Very very very much writing C++. "

    Then, uh, what’re you doing that’s stomping over memory you don’t own? I can’t believe you’re another one of these morons who’s too stupid to copy data from buffer A to buffer B properly (it truly blows my fucking mind whenever I see a file ending in .cpp that contains some idiot dicking about with strcat and all its other retarded friends and then getting it wrong. What kind of knucklehead does that shit? It’s utterly inexcusable).

    "a) THe dispose method I jsut listed is not faulty. Well, it’s only as faulty and an equivalent c++ destructor could be faulty. "

    Except the chance of a destructor being called more than once is about zero, and a destructor doesn’t have to be safely callable multiple times, so isn’t faulty. Dispose does, yours doesn’t, so it’s faulty.

    "You can manage resourced in an extensible (generic?) way. That’s the benefit of IDisposable being an interface that you can implement any way you see fit. "

    I think you just missed the point of genericity. I don’t want to have to write a whole new class with a whole new IDisposable implementation just to satisfy some ad hoc need.

    "I do often do the things that GC is good at (for example, returning arbitrary graphs of objects from a function), and judging by the thousands of lines of code I read, I’m not alone in this 🙂 "

    Highly unlikely. There is little need for typical business apps (== most apps on the planet) to deal with such things at all regularly. Mort just doesn’t do that kind of thing. Just as well, since he uses VB.

    "They should be. However, in an extremely heterogenous system where people are passing objects around left and right. It’s not uncommon for someone to mix up a smart-ptr with an auto-ptr. All it takes in one miscreant auto-ptr to delete something that someone else owns for these issues to occur. "

    Dunno about you, but _my_ smart pointers are immiscible (or safe to mix), so what mixing is there?

    "That’s the way that JayBaz’ work differently. You much destruct them with Dispose, or else he considers you to have a bug. He also thinks that finalizers freeing resources is a crutch for sloppy programming. "

    If finalizers are sloppy, what the hell is GC?

  25. Dr. Pizza: "Then, uh, what’re you doing that’s stomping over memory you don’t own? "

    Simple: I’m making a mistake. I’m human. It’s all too easy to do accidentally in C++. I’m rarely even using raw buffers, although currently I am using them, because the previous creators of said code used them, and I’m slowly trying to move it over to code that is a little bit safer than before. Again, it is all too easy to accidentally screw something up when you have complete and total access to all memory in your process, and all it takes is one mistake to screw it all up.

    Note: you said, "What kind of knucklehead does that shit? It’s utterly inexcusable." Well, the knucklehead in this case is an unmanaged language allowing this to happen 🙂

    If you truly believe that this sort of behavior is inexcusable (i.e. not excusable), then I’m having trouble understanding why you’re against the system preventing it from happening in the first place 🙂

    "I think you just missed the point of genericity. I don’t want to have to write a whole new class with a whole new IDisposable implementation just to satisfy some ad hoc need. "

    You don’t have to do that either in a managed language. You can have the _equivalent_ of your C++ destructor-style RAII in a managed language in a _generic_ way by using classes that implement IDisposable. If you wanted a new style of cleanup in C++ you’d need to implement a new RAII object with a destructor that did your style of cleanup. Similarly, you could do the same with an IDisposable object. Or am I still missing something? Can you give me an example of how genericity helps you here where the same thing can’t be done with disposable objects?

    "Highly unlikely. There is little need for typical business apps (== most apps on the planet) to deal with such things at all regularly. Mort just doesn’t do that kind of thing. Just as well, since he uses VB. "

    This isn’t a discussion about Mort. Mort certainly doesn’t need to think about resources at all. He just wants to get the job done. He wouldn’t care if there were a handle leak or a memory leak; he would only care if it ended up causing his program to crash. He would then learn what he needed to fix it, curse, and move on. Neither RAII nor disposable objects will help in either of these situations. I also disagree that this doesn’t come up. One of the most common things for a Mort to be coding up is a WinForms app that does something. Chances are that he has a component that contains another component, which then has a reference back to its containing parent (I’ve seen that in most WinForms code I’ve looked at). Right there is a simple circular reference. This happens because from the outer container Mort can write code to talk to the inner container. But then, once he’s in the inner container, he realizes he needs information from the outer one. What’s the easiest way to do that? Just have each contain a reference to the other.

    "Dunno about you, but _my_ smart pointers are immiscible (or safe to mix), so what mixing is there? "

    I’m interested in this. How do your smart pointers which addref/release mix with autoptrs which just delete?

    As I said, finalizers are a sloppy way to release disposable resources. By marking something as disposable you are declaring that correct usage of this object involves "disposing" of it when you’re done with it. This is because the object holds on to some precious resource that the system is unable to manage without explicit interaction from the programmer. Much the same way that memory used to need explicit management. So best-practice handling of that resource involves ensuring (but not requiring) that you release the unmanaged resources when you are done with them. Jay’s pattern ensures that you get feedback immediately when you are not doing this, i.e. when you are depending on the garbage collector to return resources that it is not responsible for managing back to the system, you find out.

    In the future I see it being completely reasonable and likely for the GC’s realm of responsibility to shift from being all about memory to being able to handle other types of resources (like handles). In that system you can start using disposable objects less and less while the GC takes care of more and more for you.

    Until then, when using non-managed resources, the pattern makes sure you follow best practices and returns people to the same state they were in with C++, when they had to pay attention to and manage all resources.

    Note: I see your point about destructors and call-once. However, there is no guarantee that a destructor is only called once in C++ (please correct me if I’m wrong here). So it’s possible (again through accident) to have issues crop up because of that. Similarly, it’s possible to have multi-dispose calls crop up. If you have a resource that has issues with that (again, I don’t know of any), then it’s pretty trivial to have a flag to prevent that from happening. However, that is neither here nor there. That complements Jay’s pattern, it doesn’t preclude it.

  26. Dr. Pizza: Oh and " I can’t believe you’re another one of these morons who’s too stupid to copy data from buffer A to buffer B properly"

    Yup. I’m one of those morons. I’ve definitely screwed things up more than once in my lifetime 🙂

    Interestingly enough, I’m not sure if I know anyone who hasn’t. Certainly some of the best developers working on the best software out there have been guilty of this without question.

    I don’t go out there playing fast and reckless. I’m very conservative and I try to be safe all the time. However, as it happens, you can still mess up.

    For example: I had a data structure that I was putting objects into. At the end of the day I went through and passed through the data structure, releasing all the references I had to it. Of course, what I didn’t realize at the time (because I was thinking about other things) was that the data structure itself would release the objects when it got cleaned up.

    So I ended up _accidentally_ double-releasing objects. Of course, this meant that way down the line they got freed a little too early, while someone else still held onto them. That someone would then try to use them, and it might or might not work out. Finding that bug was very unfun and took an incredible amount of time (mostly because things worked pretty well, and only some of the time did a problem actually happen).

    This was a stupid mistake on my part. At the time, I’d thought that this was a location where a circular reference could pop up, so I thought it was necessary for me to handle it. As it happened, I could prevent the circularity in a different location. I did that, and then forgot about what I was doing here.

    This is a case where there is an enormous burden on the programmer to make sure he’s doing everything right. Not only that, but there are devastating consequences when you don’t.

    It also means that I have to understand certain parts of the system as a whole, and realize that a change in one location involves changes in others to ensure correctness. Note: the part of the system that I had to care about here (memory management and lifetime) is 100% unrelated to the work I do. Nothing about my work deals with memory management. It deals with C# and VS. Being forced to think about and manage other things just ends up distracting me from the work I want to be doing.

  27. D. Brian Ellis says:

    Quick comment (though this conversation is pretty much over, I see). I work in telemetry, and most of my applications are extremely focused: one specific task, maybe 3-20 users, and extremely important. I have become a die-hard C# fan and I wish I could change over completely. However, anytime I have to interact with our real-time systems it just isn’t possible. We switched from C# to C++ on one app and cut CPU usage by 60%. This app was a great example because it made use of COM, API calls, .dll function libraries, etc. I went to a C# class with programmers from NASA, GM, etc., and I’ve talked to our vendors as well. What it comes down to is that this is one area that will remain C++ almost entirely. While it might be possible to get the same or even better performance out of C# in most situations (it is), in real-time programming the thing that gives you performance is CONTROL: control over garbage collection, threading, exact resources and memory. Things that C# can’t give you the final say on.

    Brian

  28. D. Brian: I’d love to hear more on this.

    For example, how does C++ give you the control you need over:

    a) GC

    b) Threading

    c) exact resources

    d) memory

    Have you looked into the new CLR hosting APIs? Would they provide the control necessary to alleviate these issues? I know the SQL Server team used them to get the perf and control necessary to embed the CLR in SQL Server 2005.

  29. DrPizza says:

    "If you truly believe that this sort of behavior is inexcusable (i.e. not excusable), then I’m having trouble understanding why you’re against the system preventing it from happening in the first place 🙂 "

    Because the systems that prevent it have all sorts of problems of their own.

    "You don’t have to do that either in a managed language. You can have the _equivalent_ of your c++ destructor style RAII in a managed language in a _generic_ way by using classes that implement disposable."

    Oh? I can get something as simple to use as ON_BLOCK_EXIT to allow ad hoc fabrication of RAII objects?

    "If you wanted a new style of cleanup in C++ you’d need to implement a new RAII object with a destrutor that did your style of cleanup."

    Yeah, see, that’s where you’re wrong. I wouldn’t. ScopeGuard makes sure of that.

    "This isn’t a discussion about mort."

    No, it’s a discussion of how problematic reference counting is in general. Answer: it isn’t.

    "I’m interested in this. How do your smart pointers which addref/release mix with autoptrs which just delete? "

    By not mixing at all?

    "As I said, finalizers are a sloppy way to release disposable resources. By marking something as dispoable you are declaring that correct usage of this object involves "disposing" of it when you’re done with it. This is because the object holds onto some precious resource that the system is unable to manage without explicit interaction from the programmer."

    But you’re not saying that. You’re saying that the user of the class /can/ force tidying up to occur prior to (unpredictable) garbage collection if they determine they need it. You’re not saying they /must/.

    "Much in same way that memory used to need explicit management. So, best practise handling of that resource involves insuring (but not requiring) that you release the unmanaged resources when you are done with them. Jay’s pattern insures that you get feedback immediately when you are not doing this. I.e. when you are depending on the garbage collector to return resources that it is not responsible for managing back to the system you then find out."

    Finalizers aren’t effective at ensuring that, because they’re unfortunately not guaranteed to run.

    "In the future I see it being completely reasonable and likely for the GC’s realm of responsibility to shift from being all about memory to being able to handle other types of resources (like handles). In that system you can start using disposable objects less and less while the GC takes care of more and more for you. "

    The GC can’t do that, because it doesn’t know enough. All the GC knows about is memory pressure. It doesn’t know when database connections are about to run out, or when GDI handles are about to run out, or anything else. I think that .NET 2.0 will have a hack to associate a kind of "memory pressure" value to an object, but that’s not a complete solution.

    "Until then, when using non-managed resources, the pattern makes sure you follow best practises and returns people to the same state they were in C++ when they had to pay attention to and manage all resources. "

    But in C++ I /don’t/ have to pay attention and manage /all/ resources. I don’t do memory management; I let classes which I didn’t write deal with it for me. I don’t manage file handles; I let classes which I didn’t write deal with it for me. I only have to manage resources when those resources have unusual lifetimes.

    "Note: I see your point about destructors and call-once. However, there is no gaurantee that a destructor is only called once in C++ (please correct me if I’m wrong here)."

    It would be a very surprising thing indeed if a destructor were called more than once (I daresay that most people wouldn’t even know how to call a destructor), and would be venturing into the world of undefined behaviour anyway (i.e. it’s not /permitted/ to call a destructor more than once). This sets them apart from Dispose which /must/ be safe to call more than once.

    "So it’s possible (again through accident) to have issues crop up because of that."

    I would be genuinely surprised if anyone has /ever/ accidentally called a destructor more than once.

    "Similarly, it’s possible to have multi-dispose calls crop up. If you have a resource that has issues with that (again, I don’t know of any), then it’s pretty trivial to have a flag to prevent that from happening."

    CloseHandle on closed handles, for example, can (in some situations) cause a problem.

    "However, that is neither here nor there. That complements Jay’s pattern, it doesn’t preclude it. "

    It is here and there, because it’s something that Dispose must take care of that destructors do not have to bother with.

    "Interestingly enough, i’m not sure if I know anyone who hasn’t. Certainly some of the best developers working on the best software out there have been guilty of this without question. "

    I know I haven’t.

    "For example. I had a data structure that I was putting objects into. At the end of the day I went through and passed through the data structure releasing all the references I had to it. Of course, what i didn’t realize at the time (Because i was thinking about other things) was that the data structure itself would release the objects when it got cleaned up."

    <Cheryl>Why would you do that?</Cheryl>

    How could such a confusion even enter into what you were doing?

    "So I ended up _accidentally_ double releasing objects. Of course this mean way down the line they got freed a little too early when someone else still held onto it. They then tried to use it and had a chance that it might/might not work out. Finding that bug was very unfun and took an incredible amount of time to discover (mostly because things worked pretty well and only some of the time did a problem actually happen). "

    crtdbg?

    "It also means that I have to understand certain parts of the the system as a whole and then realize when I make a change in one location that it involves changes in others to ensure correctness. Note: the part of the system that I has to care about here (memory managment and lifetime) is something 100% unrelated to the work I do. Nothing about my work deals with memory managment. It deals with C# and VS. Being forced to think and manage other things jsut ends up distracting me from the work I want to be doing."

    The curious thing being nothing about my work is resource management, yet with C++ I still don’t have to worry about it, and I can get it right more often than the GC gets it right.

  30. Dr Pizza: I’m going to address some of these points now, and some later, preferably in small chunks, as it’s pretty difficult to read the above enormity.

    Specifically, on your questions: "Oh? I can get something as simple to use as ON_BLOCK_EXIT to allow ad hoc fabrication of RAII objects?" and "Yeah, see, that’s where you’re wrong. I wouldn’t. ScopeGuard makes sure of that."

    You can get ScopeGuard-style cleanup in C#, again using IDisposable and delegates. You simply have a generic IDisposable object which takes in a delegate and invokes it on Dispose if the disposable object hasn’t been committed.

    Wow, that’s quite a mouthful. I’ll explain.

    you could have:

    class Cleanup1<T> : IDisposable {
        Action<T> action;
        T argument;
        bool committed;

        public Cleanup1(Action<T> action, T argument) {
            this.action = action;
            this.argument = argument;
        }

        public void Commit() { this.committed = true; }

        void IDisposable.Dispose() {
            if (!committed) action(argument);
        }
    }

    you would then use it like this:

    Resource r = GetResourceSomeHow();
    using (Cleanup1<Resource> c = new Cleanup1<Resource>(ReleaseResource, r))
    // You can now call c.Commit() whenever you want.

    Note: if you don’t even want to call Commit, and always want the guard to execute when you leave scope, then just do:

    HANDLE handle = GetHandleSomeHow();
    using (new Cleanup1<HANDLE>(CloseHandle, handle))

  31. Dr Pizza: "No, it’s a discussion of how problematic reference counting is in general. Answer: it isn’t."

    I would love for it not to be a problem, and I am open to learning ways to make that the case. Could you tell me how I can easily deal with circular references in a ref-counting world, so that I don’t end up failing to release references that aren’t being used anymore?

  32. Dr Pizza: "I know I haven’t. "

    I have absolutely no idea of your capabilities or what you’ve worked on 🙂

  33. Dr. Pizza: "The curious thing being nothing about my work is resource management, yet with C++ I still don’t have to worry about it, and I can get it right more often than the GC gets it right. "

    I’m still very curious about this. How do the systems you use deal with circular references? With C++ I do have to worry about this, and I do get it wrong.

    I’m honestly looking for guidance here on the right way to do things. In the managed world I’ve just never had this problem. Not once 🙂

    It’s allowed me to express my systems in a very clear and concise manner. In the C++ world, it’s forced design changes that try to prevent these things from happening by shifting the burden onto the programmer to manage complex lifetime issues.

    I can come up with more examples if you want; however, it’s been the case that many parts of the system naturally descend into graph (i.e. cyclical) structures. Even better, many algorithms that I’ve implemented from standard books in this field describe solutions through the manipulation of graph structures.

    So far I have had a horrendous time doing this in C++, but no problems whatsoever in OCaml or C#.

  34. Dr. Pizza: I had said: "In the future I see it being completely reasonable and likely for the GC’s realm of responsibility to shift from being all about memory to being able to handle other types of resources (like handles). In that system you can start using disposable objects less and less while the GC takes care of more and more for you."

    You replied: "The GC can’t do that, because it doesn’t know enough. All the GC knows about is memory pressure. It doesn’t know when database connections are about to run out, or when GDI handles are about to run out, or anything else. I think that .NET 2.0 will have a hack to associate a kind of "memory pressure" value to an object, but that’s not a complete solution."

    That’s why I said "in the future". Memory is simply one form of resource that the GC knows how to handle right now. It can understand acquisition, use and release of that specific resource. I don’t see why the same couldn’t be said of any other resource as well.
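
    For what it’s worth, a sketch of how I’d imagine that .NET 2.0 “memory pressure” mechanism (GC.AddMemoryPressure) being used: a wrapper over a big unmanaged allocation tells the GC about memory it can’t otherwise see, so collection urgency reflects reality:

    using System;
    using System.Runtime.InteropServices;

    class UnmanagedBuffer : IDisposable
    {
        const long Size = 10 * 1024 * 1024;   // 10 MB the GC can't see on its own
        IntPtr buffer;

        public UnmanagedBuffer()
        {
            buffer = Marshal.AllocHGlobal((IntPtr)Size);
            GC.AddMemoryPressure(Size);        // tell the GC this object is heavy
        }

        public void Dispose()
        {
            Marshal.FreeHGlobal(buffer);
            GC.RemoveMemoryPressure(Size);
        }
    }

    As you say, that covers memory-like pressure only; it says nothing about handle or connection exhaustion.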

  35. Dr. Pizza: "Finalizers aren’t effective at ensuring that, because they’re unfortunately not guaranteed to run."

    This is true, but the only case where this would happen is if the process is terminated in a non-cooperative manner, i.e. through calls that bypass the normal CLR shutdown logic. I only know of TerminateProcess being able to do this, but there are probably a couple more ways. In any event, those finalizers fail to run only when the CLR is coming crashing down anyway, which means that your resources would be released (by the OS) anyway.

    I know this is not a sign that all is rosy with the world.

    I’m curious, however: what guarantees do you have that a destructor will complete if TerminateProcess is called?

    I guess my point is that for normal operation of code, this pattern _will_ reveal mistakes, and only in abnormal cases will it not. But in abnormal cases you would have issues cleaning up resources anyway (at least I think you would), so I’m not sure how it’s any different, or how RAII would really be better in that case.

  36. Dr. Pizza: "But you’re not saying that. You’re saying that the use of the class /can/ force tidying up to occur prior to (unpredictable) garbage collection if they determine they need it. You’re not saying they /must/. "

    Yes I am saying that they "must". Why? Because in the pattern that Jay haz above, the finalizer does _not_ free the resource.

    So… if the use of the class does not foce the the tidying up of the resource, then you will have a resource leak that the garbage collector will not alleviate. i.e. it will _not_ tidy it up. Not only that, but you’ll get an error in debug (or free if you choose) that you misused an object and that you should have disposed of it.

    Again: They _must_ tidy up manually. If they don’t they will get this debug failure (or, alternatively you could throw a "YouDidn’tFreeThisResourceException). There is no "can" here.

  37. Ivo says:

    "Note: you can still download the PSDK and have it be 100% supported in the C++ express sku."

    I installed the VC++ Express Beta, but it turns out it doesn’t have a resource editor (dialogs, string maps, etc). Would that be fixed if I install the PSDK? If not, the PSDK will not be enough for creating Win32 applications…

  38. Hrm… excellent question, Ivo. I will check on that tomorrow at work!

  39. DrPizza says:

    "You can get ScopeGuard style cleanup in C#, again using IDisposable and delegates. You simply have a generic idisposable object which takes in a delegate and invokes it on dispose if the disposable object hasn’t been committed.



    HANDLE handle = GetHandleSomeHow();

    using(new Cleanup1(CloseHandle, handle))

    what is "CloseHandle" here?

    ============

    "I’m still very curious about this. how do the systems you use deal with circular references? With C++ i do have to worry about this and I do get it wrong. "

    Circular references are sufficiently rare that I’ve not formulated a standard "response" to them. DAGs of various sorts, yes, but cyclic graphs? No; I just don’t do it that often.

    Pretending for a moment that I did, a few things spring to mind. I would be disinclined to create such structures of scarce resources (which seems reasonable anyway; most scarce resources probably shouldn’t be dumped into a data structure for long periods of time), which would free me to use a garbage collector for those circular structures (as I’d no longer care much about them). I _may_ use a weak pointer of some kind to prevent cycles, but it’s probably not worth the effort; a drop-in GC for the cyclic structures is easier.
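
    For concreteness, a sketch of the weak-pointer idea in C# terms, since that’s the lingua franca of this thread (names hypothetical): the child’s back-reference doesn’t keep the parent alive, so no strong cycle ever forms:

    using System;

    class Parent
    {
        public Child Child;
    }

    class Child
    {
        WeakReference parent;   // weak back-reference: no strong cycle

        public Child(Parent p) { parent = new WeakReference(p); }

        public Parent Owner
        {
            // null once the parent has been collected
            get { return (Parent)parent.Target; }
        }
    }

    The same shape works in C++ with a weak_ptr-style smart pointer.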

    "I’m honestly looking for guidance here on the right way to do things. In the managed world I’ve just never had this problem. Not once 🙂 "

    See, the things I do deal with genuinely scarce resources (which memory ain’t) often (files and database connections being the real biggies) and cycles never. So I’ve had considerably fewer problems in C++.

    "That’s why I said "in the future". Memory is simply a single form of resources that right now the GC knows how to handle. It can understand acquisition, use and releasing of that specific resource. I don’t see why the same couldn’t be said of any other resource as well. "

    One problem is that there’s no real way of finding out if there’s (e.g.) handle pressure or GDI object pressure or DB connection pressure in the system. Memory pressure, on the other hand, is very easy to detect. I just don’t _know_ how many handles I can open, and nor does the OS, so the GC has no real way of knowing the urgency with which it should collect. And there are no obvious triggers; if a handle creation fails there could be dozens of reasons (not just "out of memory"), which again the GC has no real means of responding to.

    ============

    "Yes I am saying that they "must". Why? Because in the pattern that Jay haz above, the finalizer does _not_ free the resource. "

    Then you’re violating the interface and the LSP, which is hardly good OO practice. If you’re writing classes that way, they’re _faulty_.

  40. Dr. Pizza: "Then you’re violating the interface and the LSP, which is hardly good OO practice. If you’re writing classes that way, they’re _faulty_. "

    How so? There is nothing in an interface that can state that a method must be called, or that methods must be called in a certain order, however, it is the case that those constraints do occur.

    No violation is occurring here. For example, if I try to read from a closed stream then I’ll get an exception. Does that violate the interface of the stream? Maybe… but I’m not sure how you’d do it differently.
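
    As a concrete illustration (this is just the standard library behaving that way, not code from this thread):

    using System;
    using System.IO;

    class Demo
    {
        static void Main()
        {
            Stream stream = new MemoryStream(new byte[] { 1, 2, 3 });
            stream.Dispose();
            // Throws ObjectDisposedException: the "don't use after
            // close" constraint is enforced at run time, but nothing
            // in the Stream interface itself expresses it.
            stream.ReadByte();
        }
    }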

    LSP = LeastSurprisePrinciple?

    Note: I think managing resources at all violates the LSP, whether with this methodology or with RAII. The least surprise for me would just be to know that the system was taking care of it for me. Note: that is true even in C++. The fact that I need to wrap a resource in an RAII wrapper in order to let the system reclaim it already means that I was surprised at one point to find out that it was my responsibility. However, once I learned that, I took the steps necessary to make sure I’d be safe in the future.

    The same learning applies here. What’s nice is that with this pattern you are told that you’re doing something wrong. You’ll get an error telling you what happened and what you should have done to be a good citizen and prevent it.

    I’ve never seen the equivalent warning style in the C++ world. When acquiring a handle (or a reg key or whatever), the system never tells me that I didn’t clean it up properly. This means that I will have a bug that sits around unnoticed, potentially causing a problem only after the app has run for quite a while (when I suddenly realize that my system badly needs those handles back).

  41. Dr. Pizza: "HANDLE handle = GetHandleSomeHow();

    using(new Cleanup1(CloseHandle, handle))

    what is "CloseHandle" here?"

    CloseHandle is a function of the form: void CloseHandle(HANDLE h)

    In this hypothetical situation it’s how you’d release a handle back to the system.

    Cleanup1 takes a delegate and an argument, and will invoke that delegate on its argument when the Cleanup1 object is disposed.
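
    For anyone following along, here is one plausible sketch of such a Cleanup1 (my guess at an implementation, not actual shipped code). It stores the delegate and its argument, and invokes the delegate on Dispose unless the cleanup was committed first:

    using System;

    // Hypothetical delegate shape matching "void CloseHandle(HANDLE h)";
    // in C# a HANDLE is typically marshaled as an IntPtr.
    delegate void CleanupAction(IntPtr arg);

    // A ScopeGuard-style disposable: runs the given delegate on its
    // argument when disposed, unless Commit was called first.
    sealed class Cleanup1 : IDisposable
    {
        readonly CleanupAction action;
        readonly IntPtr arg;
        bool committed;

        public Cleanup1(CleanupAction action, IntPtr arg)
        {
            this.action = action;
            this.arg = arg;
        }

        // Call once the resource has been safely handed off, making
        // Dispose a no-op (ScopeGuard's "dismiss").
        public void Commit() { committed = true; }

        public void Dispose()
        {
            if (!committed)
                action(arg);
        }
    }

    So the snippet above reads: acquire the handle, wrap its cleanup in a using block, and CloseHandle gets invoked on the handle automatically on the way out of the block, exception or no exception.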

  42. Dr. Pizza: "Circular references are sufficiently rare that I’ve not formulated a standard "response" to them. DAGs of various sorts, yes, but cyclic graphs? No; I just don’t do it that often. "

    Then I’m not sure how I can accept your position as being helpful or a constructive critique of mine. My position is that these situations do occur (quite regularly) with the code I have, and therefore having a managed system is an incredible boon.

    Considering that there is no downside (as I can get the same RAII semantics in C#, with the benefit of explicit warnings when I’m doing something wrong), I see the managed world providing me the right environment to be most productive.

  43. Dr.Pizza: "One problem is that there’s no real way of finding out if there’s (e.g.) handle pressure or GDI object pressure or DB connection pressure in the system. Memory pressure, on the other hand, is very easy to detect. I just don’t _know_ how many handles I can open, and nor does the OS, so the GC has no real way of knowing the urgency with which it should collect. And there are no obvious triggers; if a handle creation fails there could be dozens of reasons (not just "out of memory"), which again the GC has no real means of responding to. "

    I didn’t realize that. I thought that it should be possible to do that, but I just don’t know enough.

    I would think the same algorithms and feedback mechanisms that make memory GC work would be easily transferable to other domains. I can see it working with things like DB connections, but I’ll defer to you if you know more about this.

  44. DrPizza says:

    LSP is the Liskov Substitution Principle. IDisposable says "you can force early tidy up with my Dispose method, but I’ll clean up anyway". The broken finalizer approach breaks that; it changes the contract of IDisposable to "you *must* force cleanup with my Dispose method".

    =================

    "I would think the same algorithms and feedback mechanisms that make memory GC work would be easily transferable to other domains. I can see it working with things like DB connections, but I’ll defer to you if you know more about this. "

    It depends a bit on the resource. If it’s a database, for example, you can probably find out how many pooled connections you have, how many are unused, and so on. You could then make the connection pool force a garbage collection if it had no spare connections (and only if that failed to make a connection available would it begin the laborious process of building a new connection).
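
    A rough sketch of that idea (the pool type and its internals here are invented for illustration): when the pool is exhausted, force a collection and let finalizers return connections before paying for a brand new one.

    using System;

    // Hypothetical connection pool that uses the GC as a last resort.
    sealed class ConnectionPool
    {
        public object Acquire()
        {
            object conn = TryTakeIdleConnection();
            if (conn != null)
                return conn;

            // No spare connections: force a collection so that any
            // dropped-but-unfinalized connections get returned to the
            // pool before we build an expensive new one.
            GC.Collect();
            GC.WaitForPendingFinalizers();

            conn = TryTakeIdleConnection();
            if (conn != null)
                return conn;

            return OpenNewConnection(); // the laborious path
        }

        // Stubs standing in for real pool bookkeeping.
        object TryTakeIdleConnection() { return null; }
        object OpenNewConnection() { return new object(); }
    }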

    But with something like a GDI handle, you don’t really have that ability. The upper limit on the number of handles seems pretty arbitrary (I think it’s limited to a few thousand, but that may be per WindowStation or even per Desktop, I don’t know), and there’s no simple call you can make to find out how many handles there are currently open or how many are available. I don’t know if GDI even keeps track of such statistics; I think it just blindly tries to give out handles until it runs out, and then it just starts failing calls. Further, there’s no place to provide a "hook" from GDI to the GC (i.e. there’s nowhere for GDI to say "OK, I couldn’t give you a handle. I’ll make the GC run to see if I can free some up from somewhere. If that works I’ll give you a handle, if it doesn’t, I’ll throw/give you null/etc."). The same is true for HANDLEs and, I think, SOCKETs.

    The RAII approach with appropriate scoping means that all these objects have a minimal lifetime, which, in the absence of more detailed information, is the safest thing to do.

  45. Dr. Pizza: "you can force early tidy up with my Dispose method, but I’ll clean up anyway". The broken finalizer approach breaks that; it changes the contract of IDisposable to "you *must* force cleanup with my Dispose method".

    Not necessarily. It is very simple to modify the code above so that disposal also works with the GC. However, I would recommend still leaving in the assert so that you get the feedback necessary to realize that you’re not cleaning up something in an appropriate manner.

    This gives both benefits. If one does not dispose of a resource properly, it will still get disposed; however, in a debug build you’ll get a nice warning showing you that you’re doing something wrong, and you’ll know right then that you forgot to use RAII on that object.
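
    Concretely, the tweak to my earlier sketch is small (again, the names are mine): the finalizer now does free the resource, so the IDisposable contract holds, but it still asserts in debug builds so the missing Dispose gets noticed and fixed.

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    sealed class HandleWrapper : IDisposable
    {
        [DllImport("kernel32.dll")]
        static extern bool CloseHandle(IntPtr handle);

        IntPtr handle;

        public HandleWrapper(IntPtr handle) { this.handle = handle; }

        public void Dispose()
        {
            CloseHandle(handle);
            // Dispose was called; don't run the finalizer as well.
            GC.SuppressFinalize(this);
        }

        ~HandleWrapper()
        {
            // Still complain in debug builds so the leak gets fixed...
            Debug.Fail("HandleWrapper was not disposed; fix the caller.");
            // ...but honor the contract and reclaim the handle anyway.
            CloseHandle(handle);
        }
    }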

    =========

    "The RAII approach with appropriate scoping means that all these objects have a minimal lifetime, which, <em>in the absence of more detailed information</em>, is the safest thing to do. "

    Correct. I realize that such hooks might not exist right now; however, I do not see why it would not be possible to supply that information in the future.

    And if it wasn’t possible, then this would just be one of those resources that you were still forced to handle manually. However, for any resource onto which you could map the concepts necessary for GC, I think this idea would be quite useful.

    Of course, given that our IDisposable implementations tell you when you’re misusing resources, and that using RAII on them works just as well, I can see this not being a necessity. There’s not much gain to the user, and there’s little chance a user will screw it up, since we will alert them to the fact that they should have been using RAII. So the chance of any improper resource handling of non-cyclical types happening in the current situation is probably pretty low, and time would be better spent on other features. 🙂