Update: this blog is no longer active. For new posts and RSS subscriptions, please go to http://saintgimp.org.
This is a little late, but there was an interesting internal thread about Tom DeMarco’s recent article in IEEE Software entitled “Software Engineering: An Idea Whose Time Has Come and Gone?” In it he recants his early writing on metrics and control in software engineering projects and argues that software projects are fundamentally experimental and uncontrollable. Consistency and predictability in software development are largely unattainable goals. Much of what we think of when we think about “software engineering” is a dead end.
Microsoft is fairly big on software engineering, so the article caused a bit of a ruckus with some folks. Someone snidely noted that software engineering seems to have worked well for Windows 7 and questioned DeMarco’s motive for writing the article. I thought Windows 7 was an interesting example to bring up. How could a gigantic software project like Windows function without software engineering? What does its success say about DeMarco’s claim that software engineering is dead?
The Windows 7 project is widely considered a success, especially compared to the Vista project, which had many well-publicized problems. Now, I don’t know much about the Windows engineering process because I’ve never worked on the Windows team, but the comments of people who are familiar with it lead me to believe that the success of Windows 7 vs. Vista was due in large part to a reduction and streamlining of process and a shift from centralized to decentralized control.
For example, here’s a blog post by Larry Osterman (a senior Windows developer) where he says:
This is where one of the big differences between Vista and Windows 7 occurs: In Windows 7, the feature crew is responsible for the entire feature. The crew together works on the design, the program manager(s) then writes down the functional specification, the developer(s) write the design specification and the tester(s) write the test specification. The feature crew collaborates together on the threat model and other random documents. Unlike Windows Vista where senior management continually gave “input” to the feature crew, for Windows 7, management has pretty much kept their hands off of the development process.
Larry goes on to describe the kinds of control and process that the Windows 7 engineering system did have, and of course there was still a lot of it. But my impression (again as an outsider) is that the process was directed more toward setting a bar and leaving the precise means of hitting the bar up to the people doing the work.
In DeMarco’s article, he says:
Can I really be saying that it’s OK to run projects without control or with relatively little control? Almost. I’m suggesting that first we need to select projects where precise control won’t matter so much. Then we need to reduce our expectations for exactly how much we’re going to be able to control them, no matter how assiduously we apply ourselves to control. […] So, how do you manage a project without controlling it? Well, you manage the people and control the time and money.
It seems that when you compare the engineering processes of Vista and Windows 7, Vista was more about management trying to directly control the details of the engineering work, while Windows 7 was more about managing the people and controlling the time and money.
Larry wrote about Windows 7:
A feature is not permitted to be checked into the winmain branch until it is complete. And I do mean complete: the feature has to be capable of being shipped before it hits winmain – the UI has to be finished, the feature has to be fully functional, etc. […] Back in the Vista day, it was not uncommon for feature development to be spread over multiple milestones – stuff was checked into the tree that really didn’t work completely. During Win7, the feature crews were forced to produce coherent features that were functionally complete – we were told to operate under the assumption that each milestone was the last milestone in the product and not schedule work to be done later on. That meant that teams had to focus on ensuring that their features could actually be implemented within the milestone as opposed to pushing them out.
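The "complete before integration" rule Larry describes can be thought of as a quality gate that a feature must pass before its branch merges into winmain. Here's a minimal sketch of that idea; the criteria names and the `Feature` type are my own invention for illustration, since the actual Windows quality gates aren't public:

```python
# Hypothetical sketch of a "shippable before it hits winmain" gate.
# The criteria below paraphrase Larry's description (UI finished,
# feature fully functional); the field names are assumptions.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    ui_finished: bool
    fully_functional: bool
    tests_passing: bool

def ready_for_winmain(feature: Feature) -> bool:
    """A feature may integrate only if it could ship as-is today."""
    return (feature.ui_finished
            and feature.fully_functional
            and feature.tests_passing)

# A half-done feature stays in its feature branch...
incomplete = Feature("example-ui", ui_finished=True,
                     fully_functional=False, tests_passing=True)
# ...while a finished one is eligible to merge.
complete = Feature("example-shell", ui_finished=True,
                   fully_functional=True, tests_passing=True)

print(ready_for_winmain(incomplete))  # False
print(ready_for_winmain(complete))    # True
```

The point of a gate like this is exactly what Larry describes: it forces teams to scope features so they can be finished within the current milestone, rather than checking in half-working code and scheduling the rest "later."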
and Tom wrote in his article:
You say to your team leads, for example, “I have a finish date in mind, and I’m not even going to share it with you. When I come in one day and tell you the project will end in one week, you have to be ready to package up and deliver what you’ve got as the final product. Your job is to go about the project incrementally, adding pieces to the whole in the order of their relative value, and doing integration and documentation and acceptance testing incrementally as you go.”
Sounds to me like they’re saying the same thing.
I think DeMarco’s article upset some people because of the provocative title, but the content really does match real-world experience, even on gigantic projects. He’s not saying that engineering (broadly defined) is bad or that all control is bad, and he’s not denying that process needs to scale with the size of the project. Rather, Tom’s famous statement that “you can’t control what you can’t measure” has been used to justify a lot of sins in the name of software engineering, and it’s that unbalanced, control-freak brand of engineering that he’s saying is dead. Judging from Windows 7, I’d say he’s absolutely right.
We can’t predictively measure or control some of the most important aspects of a software development project, and if we try, we soon discover that the “observer effect” from physics applies to us as we end up changing the thing we’re trying to measure (and inevitably not for the better). The best we can do is to anecdotally evaluate how we’re doing by considering our past, inspecting our results, and making course corrections for the future.
That’s what agile software development is all about.