My Response to Nat's "Threads Considered Harmful" Post

I don't normally do this, but Nat's post on Professor Edward A. Lee's piece "The Problem with Threads" drew a response from me straight away.

The comment I made on the post was never approved, so I thought it was worth sharing here.

I Wrote...

From the summary of Professor Edward A. Lee's paper: "he observes that threads remove determinism and open the door to subtle yet deadly bugs, and that while the problems were to some extent manageable on single core systems, threads on multicore systems will magnify the problems out of control. He suggests the only solution is to stop bolting parallelism onto languages and components--instead design new deterministically-composable components and languages." Benjamin then takes this comparison to the biological world.

It irks me when people need to feel so in control all the time.

Without wanting to enter into a philosophical debate, I think we should caution ourselves against jumping to conclusions about the dangers of parallelism.

The irony is that there is a perception in our society that women can multitask and men cannot. Since men are the dominant force behind inventing computer languages, it is no surprise there is an intrinsic fear of parallelism. People can only easily memorise 7±2 things (or groups of things), so trying to debug and track multiple threads is no mean feat for an inexperienced (or in some cases experienced) programmer.

I spent many years building threaded systems. Many were overly complicated, and bugs were occasionally introduced that were difficult for others to track, test and fix.
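To make the kind of bug I mean concrete, here is a minimal sketch in Java (purely illustrative, not from any system I worked on) of a lost-update race: two threads increment a shared counter without synchronisation, so the final total changes from run to run.

    // Minimal illustration of a non-deterministic threading bug:
    // two threads increment a shared counter with no synchronisation,
    // so updates are occasionally lost and the total differs between runs.
    public class LostUpdate {
        static int counter = 0; // shared, unsynchronised state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++; // read-modify-write: not atomic
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            // Expected 2,000,000, but the actual value varies run to run.
            System.out.println("counter = " + counter);
        }
    }

A bug like this rarely shows up in a unit test and depends entirely on how the scheduler happens to interleave the threads, which is exactly what makes it so hard for someone else to track down later.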

From there I moved to building workflow-driven applications that operated as state machines. The state machines could adapt to dynamic rules and were much easier to visualise, log and debug.
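As a rough sketch of what I mean (the order states and events here are hypothetical, not taken from any real workflow engine), a state machine makes every transition an explicit, named step that can be logged and replayed:

    // A hypothetical order workflow expressed as an explicit state machine.
    // Every transition is a single named step, which makes behaviour easy
    // to log, visualise and replay deterministically.
    public class OrderWorkflow {
        enum State { CREATED, PAID, SHIPPED, CANCELLED }
        enum Event { PAYMENT_RECEIVED, DISPATCHED, CANCEL }

        private State state = State.CREATED;

        public void handle(Event event) {
            State next = transition(state, event);
            System.out.println(state + " --" + event + "--> " + next); // audit log
            state = next;
        }

        private static State transition(State state, Event event) {
            switch (state) {
                case CREATED:
                    if (event == Event.PAYMENT_RECEIVED) return State.PAID;
                    if (event == Event.CANCEL) return State.CANCELLED;
                    break;
                case PAID:
                    if (event == Event.DISPATCHED) return State.SHIPPED;
                    if (event == Event.CANCEL) return State.CANCELLED;
                    break;
                default:
                    break;
            }
            throw new IllegalStateException("No transition for " + event + " in state " + state);
        }

        public static void main(String[] args) {
            OrderWorkflow order = new OrderWorkflow();
            order.handle(Event.PAYMENT_RECEIVED);
            order.handle(Event.DISPATCHED);
        }
    }

Because the current state and the incoming event fully determine the next state, every run with the same inputs produces the same trace, which is what made these systems so much easier to reason about than the threaded ones.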

I guess you could argue that a state machine is a “deterministically-composable component”, but situations evolve and layers and layers of complexity get added: sometimes to an individual running instance (a special case), sometimes to the workflow for a period of time, and sometimes permanently as demands change. Working with systems like this (as I'm sure many of you do), you become acutely aware of the similarities between the programming models we use and the natural world in all its beauty and complexity.

If you believe in determinism even at the macro level, it should theoretically be possible to predict the lotto numbers each week from the kinetic and physical forces involved. Or maybe some things are just random, and we should feel comfortable treating them that way.

So what if the result was a little unpredictable, even if a computer was performing the task... doesn’t the wisdom of crowds sort this one out for us over time anyway? Think about a farm of computers doing complex parallel processing at 80% or 90% accuracy. Surely you could discount the difference as an "incorrect response" or, better yet, learn from the ambiguity.

In fact (if you believe in free will), maybe the subtle yet deadly bugs that Professor Edward Lee is talking about are the spark that will create human-like flaws in our inventions going forward... meet Pleo, anyone?

What are your thoughts?

Tags: languages, parallel programming, threads