Eli Robillard proposes Planning and Implementation should be separate projects. In the comments to this post, M. Keith Warren argues that it is impossible to write a full specification of a large (6+ month) project.
I agree with Keith, and that is why it should not be attempted. Eli’s post also highlights an important shortcoming in the current state of many software development projects: too often software is not managed by a management team; it is managed by the software developers.
Change Be Damned
What Keith talks about is closer to the Waterfall design method, one that I believe by now is largely viewed as an evil monster. You cannot conceivably know everything that will happen with a software project over 6 months when your competitors are racing to deliver functionality before you. You might not know that in 6 months a new product will be released that could cut your development time in half. Your salesmen are out in the field talking to customers about what they want, and often promising features that are not even currently under consideration by the development teams. As the old adage goes, the users won’t know what they want until they see what your software doesn’t do. We don’t code in a vacuum, we have to be responsive to environmental shifts that could drastically alter our projects.
What the industry has not done is universally embrace a set of concepts to replace complete forward design. We software developers are quick to draw an analogy to physical construction, likening creating software to building a house or skyscraper. This analogy breaks down quickly when requirements shift. Instead, we should liken software projects to an integrated office complex where the adjoining land has not been purchased, or to a city block where neither the tenants nor the zoning permits have been identified or procured.
Things change, in both the virtual and physical worlds. The root difference between the two industries is that the norm for physical construction is rigid change control, while the virtual world does little to manage change. The construction phase of a software project is much more fluid than that of a physical structure. Physical structures are pre-designed, most requirements are known beforehand, and proposed changes to the architecture are carefully evaluated for structural impact and cost before they are implemented. Not only are the requirements more rigidly conveyed, but the process through which those requirements are updated to accommodate environmental shifts is also rigidly managed.
If you have ever built your own home, you have likely experienced this. You tell your builder, “I really like the white builder-grade cabinets that you installed, but I think we should make those solid cherry. And the counter tops… the laminate works, but what if we change that to marble instead?” The builder’s reaction is 100% guaranteed: “It’ll cost you; here is our pricing sheet for upgrades.” Now, tell the builder that you have reconsidered living on a slab and that you want a full-sun basement, even though the slab is already poured and the timbers are framed. No catalog in the world will have a pricing sheet for this. To accommodate any change, you must fill out a change request. Over the course of building your home, you are likely to have many change requests, everything from moving a towel rack to an opposing wall to installing a different ceiling fan, from widening a closet to relocating a bedroom.
The architect for a physical structure draws elaborate plans, conveys them to the paying customer, revises them, and continues through this evolution until the design of the structure is agreed upon. Once agreed upon, changes to that architecture are managed through a change control process. Pricing is tied directly to change request forms, and those forms are filed along with every other bit of paper related to your contract. Before construction begins on a physical structure, the price of materials can be established down to the last box of nails, providing a very real estimate of the final cost. The builder then starts assembling construction crews and defining schedules for realizing the agreed-upon plans.
Contrast this with the software world. To most software developers, change control implies “Visual Source Safe” or some other source code control provider. We seldom consider tools that let us capture changes in requirements and tie them directly to changes in source control. Our job is to build software; changes should be up to the project managers. Our change control list often consists of an Excel spreadsheet that intersperses “Bug” lists with “Feature Requests.” Simply put, there typically is no formalized process for managing change. Estimates on software are seldom solid because requirements change, and this is perpetuated by incompetent software managers who simply want to update a Gantt chart to reflect progress.
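Even without vendor tooling, a crude version of that requirements-to-source tie can be sketched. Here is a minimal illustration (the CR-number convention, the commit data, and the function name are all invented for the example) that groups commits by the change-request ID named in their messages:

```python
import re
from collections import defaultdict

# Hypothetical commit log; in practice this would come from your
# source control provider's history query.
commits = [
    ("a1b2c3", "CR-101: widen the upload size limit per sales request"),
    ("d4e5f6", "Fix null reference in report exporter"),
    ("0789ab", "CR-101: add migration for new column"),
    ("cdef01", "CR-203: move export scheduling to the new service"),
]

CR_PATTERN = re.compile(r"\bCR-(\d+)\b")

def trace_commits(commits):
    """Group commit hashes by the change-request ID named in the message."""
    trace = defaultdict(list)
    for sha, message in commits:
        match = CR_PATTERN.search(message)
        key = f"CR-{match.group(1)}" if match else "UNTRACKED"
        trace[key].append(sha)
    return dict(trace)

print(trace_commits(commits))
```

Even this toy makes the gap visible: any commit that lands in the UNTRACKED bucket is a change nobody priced or approved.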
We know that software changes. We know we cannot reasonably architect business solutions in many cases for 6+ months out when requirements are constantly evolving around us. Instead, we should embrace that fact and consider processes that, in turn, lend themselves to embracing change. To the point, we should not let unmanaged change drive our projects.
The Rational Unified Process addresses this concept of change. Instead of attempting to solve 100% of a project’s requirements up front, RUP evolves software iteratively. This is akin to Rapid Application Development (RAD), where you code to a compromised but known feature set and evolve the project over time. The biggest difference between RAD and RUP, though, is that with RUP you identify the requirements for the entire system but flesh out only the top 20% of architecturally significant use cases during a single iteration. This helps you identify candidate architectures and construct, early on, an architecture that will accommodate the remaining 80% without compromising your longer-term goals.
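To make the 20% idea concrete, here is a toy sketch (the use-case names and significance scores are invented; RUP itself prescribes no formula) of ranking identified use cases and selecting the architecturally significant slice to drive one iteration:

```python
# Invented use cases with a single rolled-up significance score
# (in practice you might weigh architectural risk, novelty, etc.).
use_cases = [
    ("Process payment", 9), ("Sync inventory", 8), ("Audit trail", 7),
    ("Print receipt", 2), ("Edit profile", 3), ("Search catalog", 6),
    ("Bulk import", 8), ("Email alerts", 4), ("Theme picker", 1),
    ("Reset password", 3),
]

def significant(cases, fraction=0.2):
    """Return the top `fraction` of use cases by significance score."""
    ranked = sorted(cases, key=lambda c: c[1], reverse=True)
    count = max(1, round(len(ranked) * fraction))
    return ranked[:count]

# With ten use cases, the current iteration fleshes out only the top two;
# the other eight are identified and scoped, but deferred.
print(significant(use_cases))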
RUP identifies different roles within a project. Long before the architect ever gets involved, an analyst is fleshing out use cases and evaluating them by significance. Long before the coder is developing, an architect works with the analyst to develop a candidate architecture that satisfies the requirements. Requirements are modeled and versioned in tools like Requisite Pro and Rational XDE to build traceability from design to requirements. Developers can begin integrating portions of the architecture, and testers can build test scenarios to ensure requirements are functionally met. Yet these roles do not act sequentially, as they do in many projects; they overlap substantially.
Inception is where you gather requirements and document them, forming use cases. The bulk of the work lies in the Business Modeling and Requirements disciplines, while the Analysis and Design discipline plays a very minor role. The next phase, Elaboration, is where the Architect and Lead Designer spend time modeling the proposed system. Business modeling still occurs, but not as much as it did during Inception. Implementation also takes on more responsibility during Elaboration, and peaks during Construction.
The point of this diagram is to note that the Requirements discipline doesn’t stop… requirements are continually evolved and rechecked during the entire iteration. And a single iteration does not attempt to solve all problems for a complete system: it attempts to solve a small subset of problems that are scoped for this single iteration. A project spanning 1 year may actually consist of 3 or more iterations.
Project management is much more than watching the lines on a Gantt chart grow: it also consists of identifying variances in schedules, identifying their sources, and accommodating the factors that impact schedules. Project managers should be involved in the change management process, estimating the impact on existing schedules and altering deadlines accordingly. But project management for software is seldom more than inserting new tasks into an already tight schedule and asking for overtime from a few dedicated workers. On management discipline alone, software development differs greatly from physical construction.
Lack of Tools
So, you may be thinking that I am preaching to the choir. We know we have to embrace change and adapt, but it is hard to implement managed change in our organizations. We don’t have a suite of tools to accommodate the software development process.
What I would love to see Microsoft offer is a new tool set that integrates, at a minimum, Visual Source Safe and Visual Studio .NET to easily tie requirements to both design models and source code. If I accept a change request, the acceptance should not be based on a thumb-in-the-air approach. I should be able to review the current use cases for my system, identify the realization of those use cases within sequence diagrams and a physical component model, and have the model tied to the source code. This is what the Rational suite offers through Requisite Pro, Clear Quest, Clear Case, and Rational XDE. But these products fall far short of integration with the Visual Studio .NET IDE (Rational XDE attempts this, but still has a long way to go). Traceability from requirements to the source code bits would better equip Microsoft developers to estimate the impact of change on their systems. The promise of Whitehorse helps deliver traceability between design and source code, but we still lack traceability back to requirements.
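A taste of that missing traceability can be faked with nothing more than a tagging convention. This sketch is entirely hypothetical (the “realizes: UC-n” comment convention, file names, and contents are all invented): it scans source text for use-case tags and builds a crude requirements-to-code matrix.

```python
import re

# Invented convention: developers tag code with the use case it realizes.
sources = {
    "billing/invoice.py": "# realizes: UC-12\ndef build_invoice(): ...",
    "billing/export.py": "# realizes: UC-12\n# realizes: UC-30\ndef export(): ...",
    "ui/login.py": "def login(): ...",
}

TAG = re.compile(r"#\s*realizes:\s*(UC-\d+)")

def traceability_matrix(sources):
    """Map each tagged use case to the source files that realize it."""
    matrix = {}
    for path, text in sources.items():
        for use_case in TAG.findall(text):
            matrix.setdefault(use_case, []).append(path)
    return matrix

matrix = traceability_matrix(sources)
# Untagged files (ui/login.py here) fall outside the matrix entirely --
# which is itself a useful review flag.
print(matrix)
```

With something like this, evaluating a change request against UC-12 at least tells you which files are in the blast radius, which is the kind of answer an integrated tool set should give instantly.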
We are completely focused on the Construction phase of project management without focusing on change management. Hopefully Microsoft has some plans to solve that, as Rational still seems clueless about customers outside Fortune 100 companies.