Estimating Project Costs - Not a Black Art

How do you estimate the resources required for a software project? I’m guessing most readers use the tried and [not so] true method of “Taking a WAG”.

Wild-ass-guesses aren’t always bad; for small work items they can be surprisingly accurate. But the larger the project gets, the wilder the guesses… At the other extreme are methods such as that used in the Personal Software Process, which can involve designing the system almost down to individual lines of code, and estimating based on an individual dev’s rate of coding (in LOC/day, for example).

I don’t like either of those methods – the former is too wild, the latter too time consuming. Here’s one that seems to work really well and doesn’t take much effort. Wideband Delphi is actually rooted in an old method that predates computers, and it works for many processes, not just software.

In a nutshell, the method involves getting estimates from a few “experts”, then sharing/discussing assumptions, and then repeating. This gives you several “rounds”, with each round yielding two things:

  1. More detailed assumptions about the project – these actually form the basis of a design for the system.
  2. Estimates from each expert – as the design/assumptions are solidified, you will see these estimates begin to converge.

One important rule: don’t prematurely share the estimates people give – only share the assumptions. That way, the participants don’t influence each other’s estimates. After a few rounds, you’re left with an estimate that is a good approximation based on the “collective wisdom” of the participants.
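To make the mechanics concrete, here’s a minimal sketch of one round in Python. All the names here are hypothetical, not part of any standard process: each expert privately submits an estimate plus a list of assumptions, and only the pooled assumptions are fed back into the next round.

```python
# Hypothetical sketch of one Wideband Delphi round.
# Each expert privately submits an estimate (in days) plus a list of
# assumptions; only the merged assumption list is shared back.

def run_round(experts, shared_assumptions):
    estimates = {}
    new_assumptions = []
    for name, estimate_fn in experts.items():
        estimate, assumptions = estimate_fn(shared_assumptions)
        estimates[name] = estimate           # kept private until the end
        new_assumptions.extend(assumptions)  # these get discussed and shared
    # Deduplicate while preserving order; the merged list seeds the next round.
    merged = list(dict.fromkeys(shared_assumptions + new_assumptions))
    return estimates, merged
```

The key design point is that `run_round` returns the estimates and the assumptions separately, so the moderator can circulate one without leaking the other.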

Let’s take a fictitious project and see what the assumptions and cost estimates might look like after a few rounds. Let the project be originally defined as “A tool to sort numbers” (typical vague spec :))

Round 1:
A tool to sort numbers: (This can’t be hard: 2 days)

Round 2 (we ask some questions about what kind of tool, etc):
A GUI tool to sort numbers (GUI, more expensive: 3 days)
Numbers read from a file (need a UI to select the file – 0.5 days)
Parse the file (0.5 days)
Needs to be fast (need to select a good algorithm – 2 days)

Round 3 (delving deeper, asking more specific questions):
A GUI to sort numbers (3 days)
  Need a deployment project to install it (2 days)
  Needs to create shortcuts in Start menu and desktop (1 day)
Numbers read from a file
  Simple file format – parse it (0.5 days)
  Turns out the file can be very large
    Need to handle files much larger than virtual memory (5 days)
    Algorithm needs to be able to split/join data
Needs to be fast in all cases
  Different algorithm depending on the nature of the data (need dynamic algorithm selection – 1 day)
  Don’t code quicksort – can use the C library (0 days)
  Need to code mergesort (3 days)
  Need to code radix sort (3 days)

Etc…
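As an aside, the “dynamic algorithm selection” line item above might amount to something like this sketch (Python stand-in for the hypothetical tool; the thresholds and dispatch rules are invented purely for illustration):

```python
# Hypothetical sketch of the "dynamic algorithm selection" work item:
# pick a sort based on the nature of the data. The thresholds and rules
# below are invented for illustration, not taken from any real tool.

def choose_sort(numbers):
    if not numbers:
        return "library"                  # nothing to do; the stock sort is fine
    if all(isinstance(n, int) and 0 <= n < 10**6 for n in numbers):
        return "radix"                    # small non-negative ints: radix sort wins
    if len(numbers) > 10**7:
        return "merge"                    # huge inputs: mergesort splits/joins well
    return "library"                      # otherwise the stock quicksort suffices
```

Even a toy sketch like this shows why that line item costs a day: the selection rules themselves have to be designed and tested.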

So each participant ends up with notes like these, with their associated costs. As you can see, in this particular case the costs keep growing because the design refinement is revealing that more work is required (though occasionally it goes the other way).

By entering each round of costs into a spreadsheet and charting rounds along the Y axis and estimate values along the X axis, you will hopefully see a skewed cone shape as your estimates begin to converge on a point after several rounds. When you’re satisfied with the variance of your data, take the average and there’s your estimate. And the huge bonus is that you’ve answered some very important questions regarding your app design.
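If you’d rather compute the convergence check than eyeball the chart, a few lines of Python do it. The function name and the 15% tolerance are arbitrary choices for illustration, not part of the method:

```python
# Summarize one round's estimates: average, spread, and a simple
# "converged?" check. The 15% tolerance is an arbitrary illustrative choice.
from statistics import mean, stdev

def round_summary(estimates, tolerance=0.15):
    """estimates: each expert's total (in days) for one round."""
    avg = mean(estimates)
    spread = stdev(estimates) if len(estimates) > 1 else 0.0
    return avg, spread, (spread / avg) <= tolerance
```

Run it on each round’s numbers; when the last element flips to `True`, the average is your estimate.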

To get a better idea of how the process works, check out this article:
https://www.processimpact.com/articles/delphi.html

Anyone have other good cost estimation techniques to share? If you’ve tried this method, I’d love to hear your results.

Avi