(OffTopic) ‘ThreadSafe’ Classes

[Or: Rant - Code-Reviewing Random Code on the Internet.]
I’m sure I’ve stumbled upon this post once before, and I hit it again today.

I think it’s a little unfortunate that this appears so high up in the results, as the post has a bunch of problems. I don’t want to bash it too hard, as it did one or two things that are nice. One is using IDisposable as a substitute for the C# lock() statement when using ReaderWriterLockSlim. This is good. Both lock() and using(IDisposable) are prettier and easier ways to get an exception-safe try{}/finally{} pattern without writing the boilerplate (and the bugs in it), which is something to be happy about. I guess it’s implemented like this:

public class WriteLock : IDisposable
{
    private ReaderWriterLockSlim objLock;

    public WriteLock(ReaderWriterLockSlim objLock)
    {
        this.objLock = objLock;
        this.objLock.EnterWriteLock();
    }

    public void Dispose()
    {
        this.objLock.ExitWriteLock();
    }
}
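
For clarity, a statement like using (new WriteLock(this.objLock)) { ... } is just compiler sugar for roughly this hand-written try/finally:

WriteLock guard = new WriteLock(this.objLock);  // EnterWriteLock() happens in the constructor
try
{
    // ... code that needs the write lock ...
}
finally
{
    ((IDisposable)guard).Dispose();             // ExitWriteLock() runs even if an exception is thrown
}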

However, from here on things get a little shady.

Class Design – ‘Racy’ Operations

The first thing that bugs me is the public surface of ThreadSafeQueue.

public class ThreadSafeQueue<T>
{
    public int Count
    {
        get
        {
            using (new ReadOnlyLock(this.objLock))
            {
                return this.queue.Count;
            }
        }
    }

    public bool Contains(T item)
    {
        using (new ReadOnlyLock(this.objLock))
        {
            return this.queue.Contains(item);
        }
    }

    public T Peek()
    {
        using (new ReadOnlyLock(this.objLock))
        {
            return this.queue.Peek();
        }
    }

    public T Dequeue()
    {
        using (new WriteLock(this.objLock))
        {
            return this.queue.Dequeue();
        }
    }

Each of these operations is ‘thread-safe’ in exactly one way – it prevents the internal state of the Queue object this.queue from becoming corrupted.
However, none of these operations is actually helpful in writing code that is robust against race conditions. Consider a piece of code that tries to use .Peek():

 

    T obj;
    if (threadSafeQ.Count > 0 && threadSafeQ.Peek() != null)
    {
        obj = threadSafeQ.Dequeue();
    }

 

Would this code work as intended? No! Well, not unless you intended it to throw InvalidOperationException at random! Another thread can dequeue the last remaining item in between the Count/Peek checks and the call to .Dequeue().
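
The emptiness check and the removal have to happen under a single lock acquisition. Using the class’s own .TryDequeue() (its internal bug aside, more on that below), the caller-side race goes away:

    T obj;
    if (threadSafeQ.TryDequeue(out obj) && obj != null)
    {
        // the emptiness check and the dequeue happened atomically inside TryDequeue
    }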

Is there any redeeming situation where using .Count or .Peek() is useful, and any reason to include them in the class?
Well yes – I can think of two so far:
- you can use Count or Peek() to output information about your queue’s state for human consumption. Of course this information can become out of date at any time, but sometimes it’s nice just to have a rough idea, for instance as monitoring information.
- it makes it a fraction more convenient when you want to poke at the queue in the debugger Immediate window, when you are debugging your race conditions…

Bizarrely, the class includes definitions for both .Dequeue() and .TryDequeue(), despite the fact that its implementation of .Dequeue() is clearly a best-effort operation which will throw upon failure. It’s bizarre because .TryDequeue() communicates that intent much more clearly, and so makes .Dequeue() redundant.

[Aside: There is a completely different behavior/semantic that could have been chosen for dequeue which does not throw upon failure; indeed it never fails, but the calling thread blocks for as long as no queue item exists. This would be an instance of the Monitor pattern for concurrency, with waiting and signalling.]
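
For illustration only, a minimal sketch of that blocking variant (my code, not the reviewed post’s) might look like this:

using System.Collections.Generic;
using System.Threading;

// Minimal blocking-queue sketch: Dequeue() waits until an item is available.
public class BlockingQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly object sync = new object();

    public void Enqueue(T item)
    {
        lock (this.sync)
        {
            this.queue.Enqueue(item);
            Monitor.Pulse(this.sync);      // signal one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (this.sync)
        {
            while (this.queue.Count == 0)
                Monitor.Wait(this.sync);   // releases the lock and blocks until signalled
            return this.queue.Dequeue();
        }
    }
}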

Last comment on ThreadSafeQueue: there is one method I suspect has an actual out-and-out bug. If you read the whole class, did you spot it?

 

    public bool TryDequeue(out T obj)
    {
        using (new ReadLock(this.objLock))
        {
            if (this.queue.Count != 0)
            {
                obj = this.Dequeue();
                return true;
            }
        }

        obj = default(T);
        return false;
    }

ReadLock is probably the correct construct for acquiring an upgradable read lock in order to make a safe call to .Count. However, in order to call .Dequeue(), you also need to acquire the WriteLock, and taking the write lock from inside a plain read lock on a ReaderWriterLockSlim will throw or deadlock rather than upgrade.
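
To sketch what I would expect instead (using ReaderWriterLockSlim directly rather than the post’s wrapper classes, since I can’t see their exact implementation):

    public bool TryDequeue(out T obj)
    {
        this.objLock.EnterUpgradeableReadLock();
        try
        {
            if (this.queue.Count != 0)
            {
                this.objLock.EnterWriteLock();   // upgrade before mutating the queue
                try
                {
                    obj = this.queue.Dequeue();
                    return true;
                }
                finally
                {
                    this.objLock.ExitWriteLock();
                }
            }
        }
        finally
        {
            this.objLock.ExitUpgradeableReadLock();
        }

        obj = default(T);
        return false;
    }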

OK, so far we’ve only covered the first class of the post: ThreadSafeQueue. What about ResourcePool<T>? Any bugs there?

Well, for a start, let’s look at .Get().

/// <summary>
/// Retreives a resource from the pool.
/// </summary>
/// <returns></returns>
public T Get()
{
    T obj = null;
    try
    {
        int tries = 0;
        while (obj == null && (this.MaxRetry == 0 || tries++ <= this.MaxRetry))
        {
            if (Queue.Count == 0)
            {
                if (this.MaxResourceCount == 0 || (this.MaxResourceCount > this.ActiveResourceCount))
                {
                    if (this.Creator == null)
                        throw new NullReferenceException("Unable to create new Resource Instance");

                    obj = this.Creator();
                    FireEvent(ref this.CreatedResource);
                }
                else
                {
                    // wait for one to become available if we can't create one.
                    Thread.Sleep(this.IterationTimeout);
                }
            }
            else
            {
                obj = Queue.Dequeue();
            }
        }
    }
    finally
    {
        Interlocked.Increment(ref this._ActiveResourceCount);
    }

    if (obj == null)
        throw new NullReferenceException("Unable to retrieve Resource Instance");

    return obj;
}

  1. It falls victim to the trap of using the unsafe .Dequeue() function: .Get() may throw InvalidOperationException if it is called from multiple threads simultaneously.
  2. It will also increment ._ActiveResourceCount even in the case where it fails to acquire a resource and is about to throw NullReferenceException. If the exception is being caught, this, combined with a MaxResourceCount > 0, could eventually conspire to deadlock the system. (A sketch of a fix for this and the previous point follows the list.)
  3. It’s rather bizarre that the function throws NullReferenceException when resource allocation fails, instead of some more logical exception.
  4. It may just be a documentation bug, but the condition (this.MaxResourceCount > this.ActiveResourceCount) is serving to mediate the number of objects not in the queue, instead of the number of objects in the queue, as stated in comments.
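
Here is the sketch promised above for points 1 and 2. It deliberately drops the retry/MaxResourceCount logic to stay short, so it is not a drop-in replacement, and it assumes the fixed, atomic TryDequeue from earlier:

public T Get()
{
    T obj;
    if (!this.Queue.TryDequeue(out obj))       // no Count-then-Dequeue race
    {
        if (this.Creator == null)
            throw new InvalidOperationException("Unable to create new Resource Instance");
        obj = this.Creator();
        FireEvent(ref this.CreatedResource);
    }

    // Only count the resource once we actually have one, so a failure can't skew the counter.
    Interlocked.Increment(ref this._ActiveResourceCount);
    return obj;
}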

What about the counterpart of .Get(), .Release()?

/// <summary>
/// Returns a resource to the pool.
/// </summary>
/// <param name="obj"></param>
public void Release(T obj)
{
    try
    {
        if (this.MaxResourceCount == 0 || Queue.Count < this.MaxResourceCount)
        {
            if (this.CheckResource == null || this.CheckResource(obj))
            {
                Queue.Enqueue(obj);
            }
        }
    }
    finally
    {
        // decrement counter so we know we can reuse the resource.
        Interlocked.Decrement(ref this._ActiveResourceCount);
    }
}

In the case where Queue.Count >= this.MaxResourceCount (which can happen, since the actual function of MaxResourceCount is to limit the number of objects not in the queue), the resource doesn’t get returned to the queue at all. One guesses it might just get leaked? Whether it is OK to leak it depends on the exact resource we are protecting, but for some reason this worries me.
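
If the pooled objects own anything, I’d at least expect the overflow path (and the failed CheckResource path) to destroy the resource explicitly rather than drop it on the floor. A sketch of the body of the try block, assuming that disposing is the right way to destroy a pooled resource:

if ((this.MaxResourceCount == 0 || Queue.Count < this.MaxResourceCount)
    && (this.CheckResource == null || this.CheckResource(obj)))
{
    Queue.Enqueue(obj);
}
else
{
    // Not going back into the pool: clean it up instead of silently leaking it.
    IDisposable disposable = obj as IDisposable;
    if (disposable != null)
        disposable.Dispose();
}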

There are also some more interesting questions: when is it correct to Dispose() this class, which implements IDisposable? And did this class need to implement IDisposable at all? Why?

Since it’s a resource shared between threads, the logical safe time to Dispose() it is when all threads have finished using the ResourcePool<T>. Let’s consider these points.

  • Neither Get() nor Release() actually does any verification that the resource pool has not already been disposed, which may make accidental unsafe usage of the class hard to detect (a sketch of the usual guard follows this list).
  • Calling Queue.Clear() [and therefore ResourcePool.Dispose()] neither a) directly releases unmanaged resources nor b) calls Dispose() on the class’s own IDisposable members, which would be the main motivations for implementing IDisposable in the first place. So the implementation of IDisposable seems either unnecessary, or incomplete.
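
The usual guard for the first point looks something like this (a sketch; the disposed field is my assumption, not something in the post):

private bool disposed;

public T Get()
{
    if (this.disposed)
        throw new ObjectDisposedException("ResourcePool");
    // ... the rest of Get() as before ...
}

public void Dispose()
{
    this.disposed = true;
    // ... clear the queue here, and Dispose() pooled items if they own resources ...
}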

By this point I’ve mostly lost interest in understanding the rest of the code, but I do think I noticed one last inspired bug(?).

/// <summary>
/// This is a required callback for the Resource pool.
/// </summary>
/// <param name="obj"></param>
/// <returns></returns>
private bool ValidateThread(Thread obj)
{
    return false; //never reuse threads
}

 

Never reuse threads? Hmm. So, just what was the point of having a ResourcePool<Thread> to act as a thread pool in the first place?… Never mind.

I’m feeling kind of guilty for ranting about someone else’s code like this, and linking to it will probably just make it appear higher up in the search results, which is counterproductive. But please don’t hate me - concurrency bugs get me so fired up that I just couldn’t resist ranting!