Inspections are a white-box technique for proactively evaluating a work product against specific criteria. You can integrate inspections into your testing process at key stages, such as design, implementation, and deployment.
In a design inspection, you evaluate the key engineering decisions. This helps you avoid expensive do-overs. Think of an inspection as a dry run of your design assumptions. Here are some practices I’ve found effective for design inspections:
- Use inspections to checkpoint your strategies before going too far down the implementation path.
- Use inspections to expose the key engineering risks.
- Use scenarios to keep the inspections grounded. You can’t evaluate the merits of a design or architecture in a vacuum.
- Use a whiteboard when you can. It’s easy to drill into issues, as well as step back as needed.
- Tease out the relevant end-to-end test cases based on risks you identify.
- Build pools of strategies (i.e., design patterns) you can share. It’s likely that, for your product line or context, you’ll see recurring issues.
- Balance user goals, business goals, and technical goals. The pitfall is to do a purely technical evaluation. Designs are always trade-offs.
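To make the practices above concrete, here is a minimal sketch of keeping a design inspection grounded in scenarios and teasing out end-to-end test cases from the risks you identify. The scenario name and risk descriptions are hypothetical examples, not prescribed content:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One concrete usage scenario that grounds the design inspection."""
    name: str
    risks: list[str] = field(default_factory=list)
    test_cases: list[str] = field(default_factory=list)

def derive_test_cases(scenario: Scenario) -> Scenario:
    """Tease out one end-to-end test case per identified risk."""
    scenario.test_cases = [f"Verify: {risk}" for risk in scenario.risks]
    return scenario

# Hypothetical scenario and risks captured during an inspection.
checkout = Scenario(
    name="Customer completes checkout",
    risks=["payment gateway times out", "inventory oversold under load"],
)
for case in derive_test_cases(checkout).test_cases:
    print(case)
```

The point of the structure is that every risk traces back to a scenario, so the inspection never drifts into evaluating the design in a vacuum.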
In a code inspection, you focus on the implementation. Code inspections are particularly effective for finding lower-level issues and for balancing trade-offs. For example, many security issues live at the implementation level, and they require trade-off decisions. Here are some practices I’ve found effective for code inspections:
- Use checklists to share the “building codes.” For example, the .NET Design Guidelines are one set of building codes. There are also building codes for security, performance, and so on.
- Use scenarios and objectives to bound and test. This helps you avoid arbitrary optimization or blindly applying recommendations.
- Focus the inspection. I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
- Pair with an expert in the area you’re inspecting.
- Build and draw from a pool of idioms (i.e., patterns and anti-patterns).
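As a sketch of how part of a “building codes” checklist can be automated, a small script can flag low-hanging implementation issues and leave the trade-off decisions to the manual inspection. The rule names and regular expressions here are simplified, hypothetical examples:

```python
import re

# Hypothetical checklist entries: each pairs a rule name with a regex
# that flags a common implementation-level issue for manual review.
CHECKLIST = [
    ("hardcoded-secret", re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE)),
    ("sql-concatenation", re.compile(r"execute\(\s*['\"].*['\"]\s*\+")),
    ("bare-except", re.compile(r"except\s*:\s*$")),
]

def inspect(source: str) -> list[tuple[int, str]]:
    """Return (line number, rule name) pairs for every checklist hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in CHECKLIST:
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

sample = 'password = "letmein"\ncursor.execute("SELECT * FROM users WHERE id=" + uid)\n'
for lineno, rule in inspect(sample):
    print(f"line {lineno}: {rule}")  # prints the flagged line and rule
```

A tool like this only finds the mechanical hits; the point of the inspection is to decide which hits matter in context.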
Deployment is where the application meets the infrastructure. Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability, and manageability. Here are some practices I’ve found effective for deployment inspections:
- Use scenarios to help you prioritize.
- Know the knobs and switches that influence runtime behavior.
- Use checklists to help build and share expertise. Knowledge of knobs and switches tends to be low-level and art-like.
- Focus your inspections. I’ve found it more productive and effective to do several focused inspections than one broad sweep. Think of it as divide and conquer.
- Set objectives. Without objectives, it's easy to wander all over the map.
- Keep a repository. In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point. Each team then tailors for their specific project.
- Integrate inspections with your quality assurance efforts for continuous improvement.
- Identify the skill sets you'll need for further drill-downs (e.g., detailed design, coding, troubleshooting, maintenance). If you don't involve the right people, you won't produce effective results.
- Use inspections as part of your acceptance testing for security and performance.
- Use checklists as starting points. Refine and tailor them for your context and specific deliverables.
- Leverage tools to automate the low-hanging fruit. Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
- Tailor your checklists for application types (Web application, Web service, desktop application, component), for verticals (manufacturing, financial, and so on), and for project contexts (Internet-facing, high security, and so on).
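Knowing the knobs and switches can itself be captured as an executable checklist, which is one way to turn that low-level, art-like knowledge into something a team can share. The setting names and expected values below are hypothetical examples; a real checklist would come from your shared repository and be tailored per project:

```python
# Hypothetical deployment "knobs and switches" checklist:
# setting name -> expected value, with the rationale as a comment.
EXPECTED = {
    "debug": "false",               # security: never ship with debug enabled
    "min_tls_version": "1.2",       # security: enforce a modern TLS floor
    "connection_pool_size": "100",  # performance: sized for expected load
}

def inspect_config(actual: dict[str, str]) -> list[str]:
    """Compare deployed settings against the checklist; report deviations."""
    findings = []
    for knob, expected in EXPECTED.items():
        value = actual.get(knob)
        if value is None:
            findings.append(f"{knob}: missing (expected {expected})")
        elif value != expected:
            findings.append(f"{knob}: {value} (expected {expected})")
    return findings

# A hypothetical deployed configuration with one wrong and one missing knob.
deployed = {"debug": "true", "min_tls_version": "1.2"}
for finding in inspect_config(deployed):
    print(finding)
```

Running a script like this before each deployment turns the inspection into a repeatable gate rather than a one-time review.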
In the future, I'll post some more specific techniques for security and performance.