Concurrency, part 7 - Why would you ever want to use concurrency in your application?

So I've spent a great deal of time talking about concurrency issues, but one thing I've avoided mentioning until now is when you actually need to worry about concurrency.

The first (and most common) time that concurrency matters is when your code lives in a DLL.  If your code is in a DLL, then you've got to worry about concurrency (unless you have some external contract that guarantees your code is only called on a single thread, like being a COM apartment model object).  This is an inescapable rule: if you're in a DLL, you have no control over how your code is called, and you must assume a worst-case scenario.  And there are times in a DLL when it's necessary to do work on separate threads - as I mentioned before, you can't assume that your caller's thread has initialized COM, which means that COM calls in a DLL generally have to be done on another thread that initializes COM itself (unless your code is living in a COM DLL, in which case you can safely assume that your caller's called CoInitialize for you).

But if you're writing an application, you still might want to use multiple threads.  The biggest reason is convenience.  There are times when it's just useful to be able to kick a chunk of work off onto another thread.  This is especially true for applications that interact with the user.  Even though those applications spend 99% of their time idle, it's critically important that they be immediately responsive to the user.  So any task that could conceivably take a long time (like opening a file) should be performed on a thread that's not interacting with the user.  That's why my copy of FrontPage has 7 threads running.  In fact, on my machine, the only application with just one thread is cmd.exe - all the other processes have at least two.  Outlook has 37 threads on my machine right now :).
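To make the "kick a chunk of work onto another thread" idea concrete, here's a minimal sketch.  The post is talking about Win32 applications, but portable C++ std::async shows the same pattern; load_file_slowly and begin_load are hypothetical names, and the sleep is just a stand-in for a genuinely slow operation like opening a file:

```cpp
#include <chrono>
#include <future>
#include <string>
#include <thread>

// Hypothetical stand-in for a slow operation (e.g. opening a large
// file); the sleep simulates the latency of real I/O.
std::string load_file_slowly(const std::string& path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return "contents of " + path;
}

// Kick the slow work onto another thread and return immediately, so
// the calling ("UI") thread stays free to respond to the user.
std::future<std::string> begin_load(const std::string& path) {
    return std::async(std::launch::async, load_file_slowly, path);
}

// The UI thread can check this between handling user events instead
// of blocking until the file is open.
bool is_ready(std::future<std::string>& pending) {
    return pending.wait_for(std::chrono::milliseconds(0)) ==
           std::future_status::ready;
}
```

A message loop would call is_ready() once per iteration and only call get() on the future after it reports ready, so the thread that's interacting with the user never blocks on the slow work.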

And, of course, the third reason for writing for concurrency is performance.  If your machine has only one CPU core, then adding multiple threads won't improve your performance (in fact, it'll hurt your performance, because of the time the system spends switching from one thread to another), but on a machine with more than one processor, if you have multiple CPU-bound operations that can be overlapped, then running them on separate threads can dramatically improve your scalability.
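As a sketch of that last point, here's one way to overlap CPU-bound work across threads in portable C++.  parallel_sum is a hypothetical example, not anything from this series; on a single-processor machine it only adds context-switch overhead, while on a multiprocessor machine the chunks genuinely run in parallel:

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Split a CPU-bound reduction into chunks and run them on separate
// threads.  Each thread writes only its own slot of `partial`, so no
// locking is needed for the intermediate results.
std::uint64_t parallel_sum(const std::vector<std::uint32_t>& data,
                           unsigned num_threads) {
    if (num_threads == 0) num_threads = 1;
    std::vector<std::uint64_t> partial(num_threads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk =
        (data.size() + num_threads - 1) / num_threads;

    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end =
                std::min(begin + chunk, data.size());
            for (std::size_t i = begin; i < end; ++i)
                partial[t] += data[i];  // private slot per thread
        });
    }
    for (auto& w : workers) w.join();

    // Combine the per-thread partial sums on the calling thread.
    return std::accumulate(partial.begin(), partial.end(),
                           std::uint64_t{0});
}
```

Calling it with std::thread::hardware_concurrency() threads produces the same total as a serial loop; only the elapsed time changes with the number of processors available.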

But if you take the latter tack (adding multiple threads to resolve performance bottlenecks), it's critical to realize that you're potentially walking into a minefield.  All the stuff I've talked about so far is pretty straightforward, and applies to all sorts of applications.  But when you start trying to adapt your code for high-performance computing, it opens up a whole new world of potential bottlenecks and issues.

And that's what I'll spend some time talking about next - a rough primer on concurrency for scalability, and some of the issues associated with it.