Concurrency, Part 15 – Wrapping it all up.

Today I want to wrap up my concurrency series (finally).  There are some more topics I'll cover in the future (like debugging concurrency issues, which I mentioned yesterday :)), but I think I've said a reasonable amount about the topic (and frankly, I'd like to move on to other subjects).

The most important thing to realize about concurrency is that while programming for concurrency is harder than programming in a single threaded environment, if you follow a relatively straightforward set of rules, it's not that much harder.

My series started off with a discussion of what concurrency is.

I next introduced my principles of concurrent programming:

1: If your data is never accessed on more than one thread, then you don't have to worry about concurrency.

2: Critical sections can be your best friends (unless they're your worst enemy).

3: Know your lock order and never, ever violate it.

4: Don't call into other objects with locks being held, and make sure that you reference count all your objects.

5: Reference counting is hard if you're not really careful.
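As a concrete illustration of principles 2 and 3, here is a minimal sketch (in C++ with standard-library locks, my example rather than code from the series): if every code path that needs both of two locks acquires them in the same fixed order, no pair of threads can ever deadlock on them.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Illustrative shared state: two counters, each guarded by its own lock.
// The (hypothetical) lock-order rule for this module: g_lockA is always
// acquired before g_lockB.
std::mutex g_lockA;
std::mutex g_lockB;
int g_counterA = 0;
int g_counterB = 0;

// Moves one unit from A to B. Because every function that needs both
// locks takes them in the same order (A, then B), two threads can never
// each hold the lock the other needs next, so deadlock is impossible.
void TransferAtoB()
{
    std::lock_guard<std::mutex> lockA(g_lockA); // lock order: A first...
    std::lock_guard<std::mutex> lockB(g_lockB); // ...then B, never B-then-A.
    --g_counterA;
    ++g_counterB;
}
```

(C++17's std::scoped_lock can acquire both mutexes at once with deadlock avoidance built in, but the fixed-order discipline is the portable habit principle 3 describes.)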

Next, I talked about some of the reasons for introducing concurrency into your application as an introduction to discussing scalability.

From there, I discussed what it means to use concurrency as a mechanism for achieving application scalability, and I talked about some of the APIs that are useful when writing scalable applications.

And then I talked a bit about determining when you've got a CPU-based scalability issue.

Finally, I spent two articles talking about hidden scalability issues - what happens when components out of your control have scalability issues, and what happens when your code collides with the design of the computer itself to cause scalability issues.
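One concrete way code "collides with the design of the computer" is false sharing: two independent counters that happen to sit on the same cache line force the processors updating them to fight over that line, even though no lock is shared. A common mitigation (a hypothetical sketch, not code from the articles) is to pad each frequently-updated item out to its own cache line:

```cpp
#include <cassert>

// Hypothetical per-thread counter. Without the alignment, adjacent
// elements of an array of these could share a 64-byte cache line, and
// updates from different processors would repeatedly invalidate each
// other's cached copy of that line.
struct alignas(64) PaddedCounter
{
    long value = 0;
    // alignas(64) pads the struct so each array element begins on its
    // own cache line (assuming the common 64-byte line size).
};

PaddedCounter g_perThreadCounters[8]; // e.g., one slot per worker thread
```

The trade-off is memory: each counter now occupies 64 bytes instead of 8, which is why this treatment is reserved for genuinely hot, independently-updated data.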

Those articles led to my final principle: If you're looking to concurrency to make your application scalable, you need to be really, really smart - there are bottlenecks in places you didn't think about.

I also included a short article about the CLR and concurrency, and some other odds and ends.

Other resources I've come up with over the course of this series:

Eric Lippert had a great example of how you can get concurrency bottlenecks here.

There's an ancient but awesome (and still relevant) article on MSDN, written by John Vert, here.

Jeff Parker pointed out the following posts about the CLR's threading model by Rick Brewster: Article 1 and Article 2.

Comments (7)
  1. Anonymous says:

    Can we release this as a pocket bible for concurrency? 🙂

Though I don't have to worry about concurrency at this stage, it was an insightful series. Which series is coming up next?


  2. Anonymous says:

Well, here’s an app design question somewhat related to concurrency. I worked on a video streaming application that handles up to 16 video streams (though in most cases the number is 4 or 8), and for each video stream there are the following major tasks:

* receive packets from the network

* save video to disk

* decode (bottleneck)

* display (bottleneck)

How would you arrange things (threads and whatever) in this application for maximum performance?

  3. Anonymous says:

Wasn’t there a Part 0? I think it was the "spot the bug" one with the two threads copying a file or something?

  4. Anonymous says:

Larry Osterman’s Concurrency Series: This is the last post in a series about concurrent programming…

  5. Anonymous says:

I was almost expecting "the future of concurrency" and an intro to Cω after the CLR post. This was good. I enjoyed it and it left me wanting more. Always leave ’em wanting more.

    Thanks, Larry.

  6. Anonymous says:

    > How would you arrange things (threads and whatever) in this application for maximum performance

    Let me answer that question with another question: why is MAXIMUM performance important to you? Isn’t your goal to achieve ACCEPTABLE performance?

    Spending time and money to make code faster than it needs to be is stealing time and money from robustness, security, bug fixing, more feature work, and plain old returning unspent cash to shareholders.

    To actually answer your question — if you want MAXIMUM performance, don’t fool around with the NT thread scheduler. It was designed to work well for general-purpose application programming, on machines being used for everything from serving web pages to playing nethack. You’re going to waste entire microseconds in badly timed context switches.

If you want MAXIMUM performance, write your own custom thread scheduler specifically tailored to your task — preferably in tightly optimized assembly. A deep knowledge of fibers will come in handy. Good luck!

  7. Anonymous says:

    Well, this year I didn’t miss the anniversary of my first blog post.

    I still can’t quite believe it’s…

Comments are closed.
