Steve here. A reader of our 2008 Software Development’s Classic Mistakes White Paper made the following observation:
I work in the Aerospace/Defense industry and have read your article called Software Development’s Classic Mistakes 2008, dated July 2008. I am most interested in questioning the results for your most damaging classic mistakes overall, which are tabulated in Table 8. I have read that up to 70% of project failures can be attributed to incomplete and poorly communicated requirements. Furthermore, more than 50% of all errors identified in projects are introduced during the requirements analysis phase.
Could you please shed some light as to why the results of your study don’t cite mistakes that are attributed to requirements? Is this embedded in one or more of the tasks or is this a non-issue?
The reader is correct that multiple industry studies have found that requirements problems are the most common source of project challenges, so I can see why our results might seem anomalous.
The fact is that people who took our survey were given the chance to rate requirements as severe classic mistakes, and they just didn’t. We included several classic mistakes in our study related to requirements:
- Feature creep
- Shortchanged upstream activities
- Lack of user involvement
- Unclear project vision
- Requirements gold plating
Of these requirements-related mistakes, feature creep made the overall top 10 list (at #7). It was also the 6th most commonly reported mistake. None of the other requirements-related mistakes made the top 10 list for frequency, and none of them, including feature creep, made the top 10 list for severity.
Based on our consulting experience, I am not that surprised to see non-requirements mistakes percolate to the top of the classic mistakes list. Some of the other studies I’ve seen didn’t offer the option of choosing some of the problems we listed in our survey, which means their respondents never had the chance to rank those problems higher than requirements problems.
Some studies I’ve seen survey only project managers, which could give a one-sided view of which mistakes are most common. And many of the surveys I’ve seen focus only on business systems projects (most notably, the Standish Group survey), whereas our data set was for a broader set of projects.
We’ve also found that, in many cases, requirements problems are symptoms of other issues: overly optimistic schedules (which lead to shortchanging requirements work), unrealistic expectations (the same issue), shortchanged QA (which means requirements problems aren’t detected until late in the project), and so on.
We don’t have a classic mistake called simply “bad requirements” or anything comparable to that. Maybe we should add that.
Classic Mistakes Update
We’re updating our Classic Mistakes survey in 2010. Help update these results by taking the survey!
Thanks to Steve for this post! And everybody, add your voice to the results by taking Steve’s new Classic Mistakes survey. Here’s info about the survey:
Steve McConnell first introduced the concept of classic mistakes in Rapid Development. Classic mistakes are ineffective development practices that have been chosen so often, by so many projects, with such predictable results that they deserve the name. For example, one common classic mistake is starting a project with an overly optimistic schedule.
Construx has recently updated Steve McConnell’s original list of classic mistakes based on our training and consulting experiences over the last 10+ years. We are interested in your input on these 42 classic mistakes. We plan to collate the responses later this year and publish a summary report; results from this year’s survey will be available in early 2010. Results from last year’s survey are currently available on Construx’s web site.
In the following survey, we are solely interested in your personal experiences with these classic mistakes. If you believe a mistake to be common but you have not personally experienced it in the last three years or last five projects (whichever is shorter), please just answer “don’t know” or “not applicable.” We expect “don’t know / N/A” to be the most common answer on this survey.
We have included additional space on each page so that you can comment on the classic mistakes we’ve listed or suggest other classic mistakes you’ve seen.
The survey typically takes 20 to 40 minutes to complete.
Your candid and thoughtful contributions are valued. No one outside of Construx will see any of the raw data. The information you enter will be presented only in the form of summary statistics, combined with other people’s information.