COM object destructors are very sensitive functions


If you try to do too much, you can find yourself in trouble.

For example, if your destructor hands a reference to itself to other functions, those functions might decide to call your IUnknown::AddRef and IUnknown::Release methods as part of their internal operations. Consider:

ULONG MyObject::Release()
{
 LONG cRef = InterlockedDecrement(&m_cRef);
 if (cRef == 0) {
  delete this;
 }
 return cRef;
}

MyObject::~MyObject()
{
 if (m_fNeedSave) Save();
}

That doesn't look so scary now, does it? The object saves itself when it is destructed.

However, the Save method might do something like this:

HRESULT MyObject::Save()
{
 CComPtr<IStream> spstm;
 HRESULT hr = GetSaveStream(&spstm);
 if (SUCCEEDED(hr)) {
  CComQIPtr<IObjectWithSite, &IID_IObjectWithSite> spows(spstm);
  if (spows) spows->SetSite(this);
  hr = SaveToStream(spstm);
  if (spows) spows->SetSite(NULL);
 }
 return hr;
}

On its own, this looks pretty normal. Get a stream and save to it, setting ourselves as its site in case the stream wants to get additional information about the object as part of the saving process.

But in conjunction with the fact that we call it from our destructor, we have a recipe for disaster. Watch what happens when the last reference is released.

  • The Release() method decrements the reference count to zero and performs a delete this.
  • The destructor attempts to save the object.
  • The Save() method obtains the save stream and sets itself as the site. This increments the reference count from zero to one.
  • The SaveToStream() method saves the object.
  • The Save() method clears the site on the stream. This decrements the reference count from one back to zero.
  • The Release() method therefore attempts to destruct the object a second time.
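
In terms of the Save method, the reference count transitions happen inside the stream's SetSite calls. Annotated for illustration (the comments are not part of the original code):

// inside MyObject::Save(), running under the destructor with m_cRef == 0
if (spows) spows->SetSite(this);  // the stream AddRef()s us: m_cRef goes 0 -> 1
hr = SaveToStream(spstm);
if (spows) spows->SetSite(NULL);  // the stream Release()s us: m_cRef goes 1 -> 0,
                                  // which triggers a second "delete this"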

Destructing the object a second time tends to result in widespread mayhem. If you're lucky, you'll crash inside the recursive destruction and identify the source, but if you're not lucky, the resulting heap corruption won't be detected for quite some time, at which point you'll just be left scratching your head.

Therefore, at a minimum, you should assert in your AddRef() method that you aren't incrementing the reference count from zero.

ULONG MyObject::AddRef()
{
 assert(m_cRef != 0);
 return InterlockedIncrement(&m_cRef);
}

This would catch the "case of the mysteriously double-destructed object" much earlier in the game, giving you a fighting chance of identifying the problem. But once you've isolated the problem, what can you do about it? We'll look into that next time.
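
For the curious, here is a minimal, self-contained sketch of the failure and of the assertion catching it. It is deliberately simplified: the COM interfaces and interlocked operations are stripped out, the stream's SetSite calls are replaced by a bare AddRef/Release pair, and the constructor is assumed to hand out the initial reference. Build it with assertions enabled; without the assert, it performs exactly the double destruction described above.

#include <cassert>
#include <cstdio>

class MyObject {
public:
 MyObject() : m_cRef(1), m_fNeedSave(true) {}

 unsigned long AddRef()
 {
  assert(m_cRef != 0); // fires here: somebody is "reviving" us mid-destruction
  return ++m_cRef;
 }

 unsigned long Release()
 {
  long cRef = --m_cRef;
  if (cRef == 0) delete this;
  return cRef;
 }

private:
 ~MyObject()
 {
  if (m_fNeedSave) Save();
 }

 void Save()
 {
  m_fNeedSave = false;
  AddRef();            // stand-in for the stream's SetSite(this): 0 -> 1
  std::puts("saving"); // stand-in for SaveToStream
  Release();           // stand-in for SetSite(NULL): 1 -> 0, second delete this
 }

 long m_cRef;
 bool m_fNeedSave;
};

int main()
{
 MyObject* p = new MyObject; // constructed holding one reference
 p->Release();               // drops to zero; the destructor calls Save(),
                             // Save() calls AddRef(), and the assertion trips
}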

Comments (20)
  1. Frederik Slijkerman says:

    It looks like you should bump the reference count in the destructor before the Save() call.

    Something like:

    MyObject::~MyObject()
    {
     assert(m_cRef == 0);
     m_cRef = 1; // Set refcount back to valid state for other function calls.
     if (m_fNeedSave) Save();
    }

  2. Chris Becke says:

    You can’t do the cleanup in the destructor, as you cannot guarantee that the external object – now that it has a reference – won’t hold onto it for a while, and that delete this is going to continue cleaning up your object regardless of any (now) outstanding references.

  3. Centaur says:

    C++ object destructors are very sensitive functions.

    According to the Holy Standard, Chapter 3.8, Verse 1, after the destructor has started running, the object is officially dead. Any and all operations on it as a whole are considered necrophily and may be punishable by Undefined Behavior.

  4. Andy Walldorff says:

    I don’t think the change to AddRef would work. When the object is first constructed, the reference count should be 0 and AddRef should be called at some point (probably via QueryInterface) to increment the reference count. If this is not done, then the reference will never go to 0 and the object is not deleted. I guess that is one reason why ATL destructors are not virtual and the FinalRelease method is provided.
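
    The construction pattern being described would look something like this hypothetical factory (illustrative only; it assumes the constructor leaves m_cRef at zero):

    HRESULT CreateMyObject(REFIID riid, void** ppv)
    {
     MyObject* pobj = new MyObject; // constructor leaves m_cRef at 0
     // The very first, perfectly legitimate AddRef (done inside QueryInterface)
     // is a 0 -> 1 transition and would trip assert(m_cRef != 0).
     HRESULT hr = pobj->QueryInterface(riid, ppv);
     if (FAILED(hr)) delete pobj;
     return hr;
    }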

  5. Paul C. says:

    Wow, that is pretty freakish. The best solution is probably to avoid doing anything other than memory deallocation in your destructor, and make sure it only gets called once. To do that, in Release() if the reference count has gone to zero, set a flag that says you’re in cleanup. If that flag is set, don’t initiate cleanup again. So, you’d end up with a Release() that looks something like this:

    ULONG MyObject::Release()
    {
     LONG cRef, cCleanup;
     cRef = InterlockedDecrement(&m_cRef);
     if (cRef == 0) {
      // Constructor sets m_cCleanup to zero
      cCleanup = InterlockedIncrement(&m_cCleanup);
      if (cCleanup == 1) {
       if (m_fNeedSave) Save();
       delete this;
      }
     }
     return cRef;
    }

    Now, we can be certain that we only deallocate ourselves once.

  6. memet says:

    My guess would be to create an internal (COM) object to pass to the save method – if we must call the save method.

    Somehow serialize our data, pass it to this temporary object and then call the SaveToStream on that object.

  7. Lewis Jones says:

    I ran across this with a COM-like environment (Reference counted objects, interface-based accessor, just without MS’s overhead… It was for a set of all inproc DLLs so marshalling wasn’t necessary, also we were trying for cross-platform).

    Anyways, we had a strange sequence of strong references that caused the situation above. Our solution was in our implementation of the Release() method to set the reference count to something out-of-range (like -100) immediately prior to calling "delete this".

    This way any AddRef()/Release() would not have any effect (we only cared if the reference count == 0).

    Not pretty, but it worked…
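
    In code, that sentinel trick might look roughly like this (an illustrative sketch, not the original implementation):

    ULONG MyObject::Release()
    {
     LONG cRef = InterlockedDecrement(&m_cRef);
     if (cRef == 0) {
      m_cRef = -100; // park the count far from zero, so any AddRef/Release
                     // pairs issued during destruction never bring it back to 0
      delete this;
     }
     return cRef;
    }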

  8. X says:

    Calling functions that could fail from a destructor is a bad idea, not only in COM. A Save() function falls in this category.

  9. Moasat says:

    I think I would just find a better way to save the object rather than doing it in the destructor. Especially if the code required to save it involved Addref’ing the object being deleted. If the SetSite didn’t require an interface to ‘this’, then there’d be no problem here. I think, regardless of any type of refcnt hacks, handing out an interface pointer to ‘this’ from the destructor is a Very Bad Thing.

  10. rburhum says:

    When the object is first constructed, the reference count should be 0 and AddRef should be called at some point (probably via QueryInterface) to increment the reference count.

    Although I agree with you that this wouldn’t work, I wanted to point out that I have seen implementations of an Unknown object that start with a reference count of 1 – this is not too uncommon.

    Another solution would be to model the save behavior with finer granularity; an explicit call to Save() before you reach the destructor.

    Either way… do I sense a series of articles on COM? I can’t wait!

  11. rburhum says:

    If the SetSite didn’t require an interface to ‘this’, then there’d be no problem here. I think, regardless of any type of refcnt hacks, handing out an interface pointer to ‘this’ from the destructor is a Very Bad Thing.

    Handing out pointers to ‘this’ is not necessarily a "bad thing" or hack. In fact, this is very common with event sink objects and such kind of COM objects.

  12. Moasat says:

    Handing out pointers to ‘this’ is not necessarily a "bad thing" or hack. In fact, this is very common with event sink objects and such kind of COM objects.

    I was referring to the idea of handing it out from within the destructor and the ‘hacks’ of playing with the refcnt so that the destructor is not called twice. While it might not directly be a bad thing, it could lead to bad things and is not very nice for any other developers to have to maintain.

    If I was working on a ‘SetSite’-type function, I certainly wouldn’t want to have to deal with the fact that the interface I’m getting may point to an object that is being destroyed.

    Alternatively, if I was working on the object being destroyed, I would question code that looked like this:

    if (refcnt < 0) …

    or

    if (refcnt == -100) …

  13. waleri says:

    ULONG CMyObject::Release(void)
    {
     assert(0 != m_cRef); // Somebody calling us twice?
     if (1 == m_cRef) // Time to save, we’re about to be destroyed
      Save();
     // Save() could increase the counter
     LONG cRef = InterlockedDecrement(&m_cRef);
     if (0 == cRef)
      delete this;
     return cRef;
    }

  14. ipoverscsi says:

    I’ve done some research into C++ destructors (on my own geek time), and using AddRef() in the destructor is just bad business.

    Classes with virtual methods use a vtable to determine which methods to call at runtime. In most C++ implementations the vtable pointer is actually changed as the destructors run from the most derived class down to the base class; this is to ensure that base classes calling virtual methods do not access methods or fields of derived classes that have already been destroyed. This means that even if you somehow manage to prevent the memory from being reclaimed by the memory manager, virtual functions will only call the root class’s methods.
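
    A quick standalone demonstration of that effect (plain C++, nothing COM-specific):

    #include <cstdio>

    struct Base {
     virtual ~Base() { Who(); }          // prints "Base": dispatch has already fallen back to Base
     virtual void Who() { std::puts("Base"); }
    };

    struct Derived : Base {
     ~Derived() { Who(); }               // prints "Derived"
     void Who() override { std::puts("Derived"); }
    };

    int main()
    {
     Base* p = new Derived;
     delete p;                           // output: Derived, then Base
    }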

  15. Joe Beda says:

    My favorite is a component that pushed a message loop during the final release. The component in question will remain nameless…

  16. abc says:

    All the discussions do prove that "COM object destructors are very sensitive functions".

    At least it’s just as sensitive as any other ctor and dtor.

  17. autist0r says:

    What about NOT saving in the destructor? I find it cleaner and more flexible to have an explicit call to save rather than an automatic save. Ok, sometimes it’s not your choice…

Comments are closed.