On objects with a reference count of zero


One commenter claimed that

When the object is first constructed, the reference count should be 0 and AddRef should be called at some point (probably via QueryInterface) to increment the reference count.

If you construct your object with a reference count of zero, you are playing with matches. For starters, when the object is created, its reference count is not zero - the person who created the object has a reference! Remember the COM rule for references: If a function produces a reference (typically an interface pointer), the reference count is incremented to account for the produced reference. If you consider the constructor to be a function, then it needs to return with an incremented reference count to account for the object it produced.

If you prefer to play with matches, you can end up burning yourself with code like the following:

// A static creator method
HRESULT MyObject::Create(REFIID riid, void **ppvObj)
{
 *ppvObj = NULL;
 MyObject *pobj = new MyObject();
 HRESULT hr = pobj ? S_OK : E_OUTOFMEMORY;
 if (SUCCEEDED(hr)) {
  hr = pobj->Initialize(); // dangerous!
  if (SUCCEEDED(hr)) {
   hr = pobj->QueryInterface(riid, ppvObj);
  }
  if (FAILED(hr)) {
   delete pobj;
  }
 }
 return hr;
}

Notice that you're initializing the object while its reference count is zero. This puts you in the same "limbo zone" as cleaning up an object while its reference count is zero, and therefore exposes you to the same problems:

HRESULT MyObject::Load()
{
 CComPtr<IStream> spstm;
 HRESULT hr = GetLoadStream(&spstm);
 if (SUCCEEDED(hr)) {
  CComQIPtr<IObjectWithSite, &IID_IObjectWithSite> spows(spstm);
  if (spows) spows->SetSite(this);
  hr = LoadFromStream(spstm);
  if (spows) spows->SetSite(NULL);
 }
 return hr;
}

HRESULT MyObject::Initialize()
{
 return Load();
}

An object that saves itself during destruction is very likely to load itself during creation, and you run into exactly the same problem. The call to IObjectWithSite::SetSite(this) increments the reference count of the object from zero to one, and the call to IObjectWithSite::SetSite(NULL) decrements it back to zero. Decrementing the reference count to zero destroys the object, so the object is inadvertently destroyed out from under the MyObject::Load() method.

The MyObject::Create static method doesn't realize that this has happened and proceeds to call the QueryInterface method to return a pointer back to the caller, expecting it to increment the reference count from zero to one. Unfortunately, it's doing this to an object that has already been destroyed.

That's what happens when you play with an object whose reference count is zero: It can disappear the moment you relinquish control. Objects should be created with a reference count of one, not zero.

ATL prefers to play with matches, using the moral equivalent of the above MyObject::Create function in its object construction:

void InternalFinalConstructAddRef() {}
void InternalFinalConstructRelease()
{
    ATLASSERT(m_dwRef == 0);
}

static HRESULT WINAPI CreateInstance(void* pv, REFIID riid, LPVOID* ppv)
{
    ATLASSERT(*ppv == NULL);
    HRESULT hRes = E_OUTOFMEMORY;
    T1* p = NULL;
    ATLTRY(p = new T1(pv))
    if (p != NULL)
    {
        p->SetVoid(pv);
        p->InternalFinalConstructAddRef();
        hRes = p->FinalConstruct();
        p->InternalFinalConstructRelease();
        if (hRes == S_OK)
            hRes = p->QueryInterface(riid, ppv);
        if (hRes != S_OK)
            delete p;
    }
    return hRes;
}

ATL hands you a set of matches by calling your FinalConstruct method with a reference count of zero. If you know that you're going to get burned, you can use the DECLARE_PROTECT_FINAL_CONSTRUCT macro to change the InternalFinalConstructAddRef and InternalFinalConstructRelease methods to versions that actually increment the reference count temporarily during the call to FinalConstruct, then drop the reference count back to zero (without destructing the object) prior to the QueryInterface call.

It works, but in my opinion it relies too much on programmer vigilance. ATL's default is to hand programmers matches, relying on them to "know" that something dangerous might happen inside the FinalConstruct and to have the presence of mind to ask for DECLARE_PROTECT_FINAL_CONSTRUCT. In other words, it chooses the dangerous default, and programmers must explicitly ask for the safe version. But programmers have a lot of things on their mind, and forcing them to consider the consequences of the transitive closure of every operation performed in the FinalConstruct method is an unreasonable requirement.

Consider our example above. When the code was originally written, the Load method may have been the much simpler

HRESULT MyObject::Load()
{
 CComPtr<IStream> spstm;
 HRESULT hr = GetLoadStream(&spstm);
 if (SUCCEEDED(hr)) {
  hr = LoadFromStream(spstm);
 }
 return hr;
}

It wasn't until a month or two later that somebody added site support to the Load and Save methods. This seemingly simple and isolated change, adhering perfectly to the COM rules for reference counting, had ripple effects back through the object creation and destruction code paths. If you put four levels of function calls between the FinalConstruct and the Load, this fourth-level-caller effect can very easily be overlooked. I suspect that these nonlocal effects are one of the most significant sources of code defects. ATL was being clever and optimized out an increment and a decrement (something which the compiler most likely could optimize out on its own), but in return, you got handed a book of matches.

(I don't mean to be picking on ATL here, so don't go linking to this article with the title "Raymond rails into ATL as a poorly-designed pile of dung". ATL is trying to be small and fast, but the cost is added complexity, often subtle.)

Comments (12)
  1. Anonymous says:

    Heh, I love ATL and I love running with scissors.

    About a year ago, one of my guys showed me some of his ATL code with the DECLARE_PROTECT_FINAL_CONSTRUCT used. I asked him what it did and he really couldn’t tell me. I looked at the MSDN documentation and it gives a physical description of what it does without talking about WHY it is important. (Protects your FinalConstruct by using a temporary reference count??? WTF???)

    Given that I am subject to bouts of stupidity, I pulled the old "well back in my day, we didn’t need no fancy stuff like them there horseless carriages."

    Now that I understand WHY it exists, I’m going to start using that macro. Thanks Raymond.

  2. Anonymous says:

    Hey, I saw lot of 42 in MS sample code, like this one,

    enum { DESTRUCTOR_REFCOUNT = 42 };

    where this magic 42 comes from? and people like to use foo, bar to name their classes,

    where these two babies come from?

    Thanks.

  3. Anonymous says:

42 is the answer to life, the universe, and everything in Douglas Adams' Hitchhiker's Guide to the Galaxy series. The specific question the answer is for remains unknown.

  4. Anonymous says:

    "foo" and "bar" are not used in production code. There is a long history in software documentation to use these names as generic names, the same way you might use "x" and "y" in mathematics.

    Etymology of foo and bar:

    http://en.wikipedia.org/wiki/Metasyntactic_variable#Foo.2C_Bar_and_Baz

  5. waleri says:

    What’s wrong with simple call to AddRef() after the object is created with new and call Release() instead of delete?

  6. rburhum says:

    I thought that the "static HRESULT WINAPI CreateInstance" didn’t use delete, but Heapfree instead. The reason being that it would allow you to make the destructor private to enforce reference counting behavior as explained by Larry Osterman in http://blogs.msdn.com/larryosterman/archive/2005/07/01/434684.aspx

  7. I hope you were joking about using HeapFree to destroy a C++ object…

  8. Anonymous says:

    it is never a good idea to expose a partially initialized object. If you need an Initialize method, then you should probably use a factory method to make sure no one gets a reference to a partially constructed object.

  9. Anonymous says:

    I think ATL does this for a reason. As far as I can see, there’s no immediate performance benefit to initializing the m_cRef member to zero over initializing it to one…

    As far as I can see, the problem is that by the time you’ve called new or CreateInstance, your object has no *interface* references to it, and that’s what IUnknown’s implementation tracks.

    The fact that there’s a C++ pointer pointing to it is, I guess, an implementation detail.

    The ATL equivalent of your first example, when you add an explicit Initialize method (ATL has no support for parametrized constructors, so this is the only way to init an object with a specific state) looks like this, and has the same problem:

    [code]

    CComObject<MyObject>* pobj = 0;

    hr = CComObject<MyObject>::CreateInstance(&pobj); // refcount == 0

    if (FAILED(hr)) return hr;

    hr = pobj->Initialize(); // still dangerous!

    if (SUCCEEDED(hr)) {

    hr = pobj->QueryInterface(riid, ppvObj); // refcount == 1, all is well now.

    }

    return hr;

    [/code]

    The standard solution is bumping the refcount before calling any methods on the object, and then Releasing before handing out the now-only reference.

    Ah, now I see that you’re talking about protecting FinalConstruct. That is a bit of a mess, but I like the way it’s configurable :-)

  10. Anonymous says:

    jbn: This *is* the implementation of a factory method. As I said, ATL doesn’t support parameterized constructors, so an explicit Init method is the only way (that I know of) to pass it initial state.

    – Kim

  11. Anonymous says:

    Heh. C++ is fundamentally broken anyway, so you are always playing with matches.

    ATL has one BIG BIG advantage over MFC. You end up with statically linked code that does not put C++ objects across a DLL boundary.

    Why is this good? Well, MFC objects have been changed several times. Which means that the objects on both sides of the DLL must be compiled the same way.

    Which is impossible to make happen in the real world of ActiveX OCXs loaded into browser pages.

    (You end up with random crashes of the browser at some point.)

    1. There can be only one OCX on a box, loaded through the registry. So path is (mostly) irrelevant.

    2. MFC OCXs must be linked to MFC as a dll, not as a static library. (Why? I’m sure there is some good reason.)

    3. Different versions of IE are compiled against the different versions of MFC.

    4. You cannot control the field deployment of versions of IE and versions of your software.

    5. You cannot control the deployment other OCXs that might be loaded and their version of MFC.

    (I’m missing some of the obvious facts, like that there can only be one version of the MFC dll loaded into a process space. And that there is no way to query a C++ object to detect object compilation inconsistencies.)

    So, you end up with a no win situation. Upgrade to the latest version of the compiler, you won’t work with legacy apps that the customer has.

    The only real answer is to rewrite your OCXs away from MFC into ATL, or something else.

    And all because someone had to change MFC objects that cross a DLL boundary. At least ATL doesn’t do that. Reference count bugs, I can fix those. DLL hell deployment issues? Rewrite.

  12. Anonymous says:

    Are you still using raw pointers? Try smart pointers. Suppose class Foo below properly initializes itself with a 0 reference, as all good classes should:

    SmartPointer<Foo*> pNewFoo = new Foo();

    Guess what happens as soon as I point pNewFoo at the new Foo()? The reference count is bumped, and you can be sure it won’t be leaked. It works dandy.

Comments are closed.