Lessons from the Dreamliner?

Up here in Seattle, Boeing is a very important company. Now that their Dreamliner is going through some "teething pain," as they phrased it, I read a little bit about how this happened. After all, they had spent years designing the plane, built simulators, and tested all the parts, yet still had problems once the plane actually took to the air.

The article I started with is here at the Harvard Business Review: https://blogs.hbr.org/cs/2013/01/the_787s_problems_run_deeper_t.html. It reminded me of the challenges of software testing, so I read it thoroughly a few times to see what I could learn from it.

What really caught my eye starts in Allworth's 8th paragraph, in which he talks about integrating all the parts and the testing needed there. That is essentially the same integration problem we face as testers. A team may give us code for some shared feature - I'm thinking of spell check, but you can also think of file I/O, HTTP communication or whatever you want - and we have to ensure our use of their code works as designed. Typical errors are that we integrate their code incorrectly, we uncover a bug in their code, or there is some feature unique to OneNote for which the original code was not designed.
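To make this a little more concrete, here is a rough sketch of the kind of test I mean. It is in Python, and every name is invented for this post - OneNote is not actually structured this way - but it illustrates the three failure categories: we call their code incorrectly, their code has a bug, or our documents contain something the shared code was never designed for.

```python
# Hypothetical sketch: integration tests for a shared spell-check component
# consumed by a host application. All names are invented for illustration.
import unittest


class SharedSpellChecker:
    """Stand-in for the partner team's component."""

    def __init__(self, dictionary):
        self._dictionary = {w.lower() for w in dictionary}

    def is_correct(self, word):
        # Assumed contract: 'word' is a single token with no markup.
        return word.lower() in self._dictionary


class HostSpellCheckAdapter:
    """Host-side wrapper: the code *we* own and can get wrong."""

    def __init__(self, checker):
        self._checker = checker

    def misspelled_words(self, text):
        # Host-specific concern: our documents may contain tokens the shared
        # component was never designed for (e.g. hyperlinks, ink tags).
        words = [w.strip(".,;:!?") for w in text.split()]
        return [w for w in words if w and not self._checker.is_correct(w)]


class IntegrationTests(unittest.TestCase):
    def setUp(self):
        self.adapter = HostSpellCheckAdapter(
            SharedSpellChecker(["the", "plane", "flew"]))

    def test_we_call_the_component_correctly(self):
        # Catches errors in *our* use of their code, such as forgetting to
        # strip punctuation before handing tokens over.
        self.assertEqual(self.adapter.misspelled_words("The plane flew."), [])

    def test_shared_component_meets_its_spec(self):
        # Exercising the partner's code directly against its documented
        # contract; a bug on their side shows up here, not in our wrapper.
        checker = SharedSpellChecker(["plane"])
        self.assertTrue(checker.is_correct("Plane"))  # spec: case-insensitive
        self.assertFalse(checker.is_correct("plain"))

    def test_host_specific_input(self):
        # Input unique to the host that the component was not designed for.
        self.assertEqual(
            self.adapter.misspelled_words("flew https://example.com"),
            ["https://example.com"])  # documents today's behavior


if __name__ == "__main__":
    unittest.main()
```

The toy code itself doesn't matter; the point is that each test pins the failure to exactly one of those three categories, which makes it much easier to hand the problem to the right team.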

So that part of the article wasn't new to me. Allworth also reminds us that if you wind up in a situation in which both you and your partner team have to make changes, you are no longer dealing with only an engineering problem. He points to corporate boundaries, lawyers and suppliers. The closest analogy for us is the partner team's time constraints and relative priorities - we can leave out the lawyers. I think his point is the same, though. When the design changes for just OneNote, we have to adjust what we do, but we control everything about OneNote. When the design changes for us and a partner team, we lose direct control over half the equation.

But I knew that already since this has happened several times in the past.

Looking at the 10th paragraph, it appears that the parts coming from the various suppliers would not fit together. This reminds me of the Mars Climate Orbiter failure, which I have written about before:

https://articles.cnn.com/1999-09-30/tech/9909_30_mars.metric.02_1_climate-orbiter-spacecraft-team-metric-system?_s=PM:TECH

My initial thought was that unit testing could have (should have?) prevented that problem. But if the specifications were not clear, you have to expect variance. So this, if true, would simply point to the need for a clear design before starting production. If all the parts came in at the correct and expected size - none of the linked articles clarify this question - then we can move the discussion back to integration concerns.
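As an aside on the unit testing point: here is a tiny, purely illustrative sketch - the function and its signature are made up and this is nothing like the actual orbiter software - of a test that encodes a unit contract at an interface. The orbiter was lost because one team's software reported thruster impulse in pound-force seconds while the other side expected newton-seconds, and a check like this sits exactly on that seam.

```python
# Illustrative only: a small test that encodes the contract between two teams
# (impulse crossing this interface must be in newton-seconds).
import unittest

LBF_TO_NEWTON = 4.4482216152605  # 1 pound-force in newtons


def thruster_impulse_newton_seconds(raw_value, unit):
    """Hypothetical boundary function: whatever unit a supplier used,
    the value leaves this interface in newton-seconds."""
    if unit == "N*s":
        return raw_value
    if unit == "lbf*s":
        return raw_value * LBF_TO_NEWTON
    raise ValueError(f"Unknown impulse unit: {unit}")


class InterfaceContractTests(unittest.TestCase):
    def test_supplier_value_is_converted(self):
        # A supplier reporting in pound-force seconds must be converted,
        # not passed through unchanged.
        self.assertAlmostEqual(
            thruster_impulse_newton_seconds(1.0, "lbf*s"), 4.4482216, places=6)

    def test_unknown_units_fail_loudly(self):
        # Better to fail the build than silently guess a unit.
        with self.assertRaises(ValueError):
            thruster_impulse_newton_seconds(1.0, "slug*ft/s")


if __name__ == "__main__":
    unittest.main()
```

Of course, a test like this only helps if the specification says up front which unit the interface uses, which loops right back to the need for a clear design.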

So I am still thinking about this. I don't think I have made much progress here, but this is a fascinating problem that closely mirrors software testing. I'll keep thinking about it.

Questions, comments, concerns and criticisms always welcome,

John