There are several things that architects do when building and delivering solutions, but one of the core responsibilities often seems to get lost early in the project lifecycle – ensuring system quality. For this reason, I thought it’d be interesting to talk about how to bring system quality back to the forefront of an architect’s thinking. Here are a few ways in which an architect can address system quality:
- Building system quality processes into the solution design process
- Building architectures that optimize system quality and are then consumed by system designers
- Establishing responsibility for system quality on risk management teams
- Including system quality issues in issue tracking
- Including system qualities in a solution’s requirements
All of these are warranted in my mind and demonstrate comprehensive thinking about nifty techniques for improving system quality. One thing that seems to be missing here – and an area I’ve been thinking about recently – is managing system quality. Someone smart once stated, “You can’t manage what you can’t measure.” I’ve experimented with measuring system quality for a couple of years now and think it offers great benefit for building high-quality systems, but it unfortunately isn’t well exploited in the architect community today. The ability to measure system quality, at least theoretically, helps predict the lifespan of a system once it is released into production.

This goes beyond traditional testing. Testing today usually covers functional, security (or penetration), failover, integration, code coverage, and similar concerns, which helps ensure that the system will function once released into production. What’s missing is some level of assurance that the system will survive business or IT changes. I assert that a system design that optimizes qualities such as Flexibility, Reusability, Testability, Maintainability, and Interoperability can withstand change better, and therefore has a longer lifespan and potentially greater ROI.
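To make the “measure it” point concrete, here’s a minimal sketch of one way an architect could score a system against those quality attributes. This is purely my own illustration – the attribute names, the 1–5 scale, and the weights are all assumptions, not part of any formal method:

```python
# Hypothetical quality scorecard: rate each attribute on a 1-5 scale
# and compute a weighted overall score. The attribute set and weights
# are illustrative assumptions only.
WEIGHTS = {
    "flexibility": 0.25,
    "reusability": 0.20,
    "testability": 0.20,
    "maintainability": 0.20,
    "interoperability": 0.15,
}

def overall_score(ratings):
    """Weighted average of 1-5 attribute ratings."""
    return sum(WEIGHTS[attr] * rating for attr, rating in ratings.items())

ratings = {
    "flexibility": 4,
    "reusability": 3,
    "testability": 5,
    "maintainability": 4,
    "interoperability": 3,
}
print(round(overall_score(ratings), 2))  # 3.85
```

Even a crude scorecard like this gives you a number to track release over release, which is the precondition for managing anything.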
The Software Engineering Institute’s Architecture Tradeoff Analysis Method (ATAM) is a great idea for understanding the tradeoffs among system qualities. I once worked with a couple of sharp individuals to create a simplified method of measuring system quality called Microsoft System Review (MSSR). Although both are useful and have been successful in practice, they require a bit more to provide the process and tools an architect needs to manage system quality. For this reason, I’ve been thinking of an approach I’ve called the System Quality Attribute Plan (SQAP). I don’t know if this name will stick…the idea is in the really, really early development stages. Anyway, SQAP has a few components:
- SQA Target Capture, a method for capturing the target measurements an architect expects the system to achieve
- ADD Framework, a simplified design process derived from SEI’s Attribute-Driven Design, Pattern-Driven Analysis and Design, and a dash of my own ideas. The point of the ADD Framework is to provide a design process driven by system quality.
- SQA Review, the process for measuring a system’s quality and tracking the results, which is largely the Microsoft System Review (MSSR).
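As a thought experiment, the target-capture and review steps could be tracked as simply as a target-versus-measured comparison. Everything below – the field names, the gap report – is a hypothetical sketch of mine, not a defined part of SQAP:

```python
# Hypothetical sketch: compare captured target measurements against
# review results and flag attributes that fall short of their targets.
def quality_gaps(targets, measured):
    """Return {attribute: shortfall} for every attribute below target."""
    return {
        attr: target - measured.get(attr, 0)
        for attr, target in targets.items()
        if measured.get(attr, 0) < target
    }

targets = {"flexibility": 4, "testability": 4, "maintainability": 3}
review = {"flexibility": 3, "testability": 5, "maintainability": 3}
print(quality_gaps(targets, review))  # {'flexibility': 1}
```

The output is the architect’s to-do list: the attributes where the design needs rework before the system can be expected to survive change.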
SQAP is a rough first stab at measuring system quality. It’s not fully baked – more an idea of mine. What’s yours?