When I need to quickly analyze a product and give actionable feedback, I use scenario evaluations. A scenario evaluation is an organized set of scenarios and criteria to test and evaluate against. The approach is generic, so you can tailor it to your situation. Here's an example of the frame I used to evaluate Code Analysis (FX Cop) in some security usage scenarios:
Scenario Evaluation Matrix
Development life cycle
- Scenario: Dev lead integrates FX Cop in build process.
- Scenario: Dev lead integrates FX Cop in design process.
- Scenario: Developer uses FX Cop in their development process.
- Scenario: Tester integrates FX Cop in testing process.
- Scenario: Developer integrates FX Cop in deployment process.
- Scenario: Dev lead creates a new FX Cop rule to support custom policies.
- Scenario: Developer uses FX Cop to evaluate security of web applications.
- Scenario: Developer uses FX Cop to evaluate security of desktop applications.
- Scenario: Developer uses FX Cop to evaluate security of components.
- Scenario: Developer uses FX Cop to evaluate security of web services.
Input and Data Validation
- Scenario: Identify database input that is not validated.
- Scenario: Input data is constrained and validated for type, length, format, and range.
- Scenario: Identify output sent to untrusted sources that is not encoded.
- Scenario: Check that secrets are not hard coded.
- Scenario: Check that plain text secrets are not stored in memory for extended periods of time.
- Scenario: Check that sensitive data is not serialized.
In this case, I organized the scenarios by life cycle, app type, and security category, which makes a pretty simple table. Explicitly listing the scenarios helps you see where the solution fits and where it does not, as well as identify opportunities. A key to effective scenario evaluation is finding the right matrix of scenarios. For this exercise, some scenarios focus on the user experience of the tool, while others focus on how well the tool addresses recommendations. What's not shown here is that I also list personas and priorities next to each scenario, which are extremely helpful for scoping.
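To make the frame concrete, here is a minimal sketch in Python of how scenarios can be carried as records tagged with category, persona, and priority, then grouped back into the matrix layout. The scenario text comes from the matrix above; the persona and priority values are illustrative assumptions, not the author's actual scoping data.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    category: str     # which section of the matrix the scenario lives in
    description: str  # the scenario text itself
    persona: str      # who performs the scenario (assumed values below)
    priority: str     # scoping priority (assumed values below)

scenarios = [
    Scenario("Development life cycle",
             "Dev lead integrates FX Cop in build process.",
             "Dev lead", "High"),
    Scenario("Input and Data Validation",
             "Identify database input that is not validated.",
             "Developer", "High"),
    Scenario("Input and Data Validation",
             "Check that secrets are not hard coded.",
             "Developer", "Medium"),
]

# Group by category to reproduce the matrix layout.
by_category = {}
for s in scenarios:
    by_category.setdefault(s.category, []).append(s)

for category, items in by_category.items():
    print(category)
    for s in items:
        print(f"  [{s.priority}] {s.persona}: {s.description}")
```

Keeping persona and priority on each record makes it easy to filter the matrix down to just the high-priority scenarios when scoping an evaluation.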
Things get interesting when you apply criteria to the scenarios above. For example:
- Recommended practice compliance
- Implementation complexity
- Quality of documentation/code
- Developer competence
- Time to implement
I then walked the scenarios, testing and evaluating each against the criteria. This produced a nicely organized set of actionable feedback on how well the solution is working (or not). I think part of today's product development challenge isn't a lack of feedback, but a lack of actionable feedback that's organized and prioritized.
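The walk itself can be sketched as a scenario-by-criteria grid of scores, with a threshold that turns low scores into actionable feedback. This is a minimal sketch, assuming a 1-5 scoring scale and hypothetical scores; in practice you would fill the grid in as you test.

```python
criteria = [
    "Recommended practice compliance",
    "Implementation complexity",
    "Quality of documentation/code",
    "Developer competence",
    "Time to implement",
]

# scenario -> {criterion: score on a 1-5 scale} (scores are assumptions)
results = {
    "Dev lead integrates FX Cop in build process.": {
        "Recommended practice compliance": 4,
        "Implementation complexity": 2,
        "Quality of documentation/code": 3,
        "Developer competence": 3,
        "Time to implement": 4,
    },
    "Check that secrets are not hard coded.": {
        "Recommended practice compliance": 5,
        "Implementation complexity": 4,
        "Quality of documentation/code": 2,
        "Developer competence": 3,
        "Time to implement": 3,
    },
}

LOW = 3  # assumed threshold: scores below this become actionable feedback

# Walk each scenario and flag the criteria that scored poorly.
feedback = {
    scenario: [c for c in criteria if scores[c] < LOW]
    for scenario, scores in results.items()
}

for scenario, gaps in feedback.items():
    if gaps:
        print(f"{scenario} -> needs work on: {', '.join(gaps)}")
```

The output is the organized, prioritized feedback described above: each scenario maps to the specific criteria where the solution falls short, rather than a loose pile of observations.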
The beauty of this approach is that you can use it to evaluate your own solutions as well as others'. If you're evaluating somebody else's solution, this helps quite a bit: you can avoid making it personal and argue from the data.
The other beauty is that you can scale this approach along your product line. Create the frames that organize the tests and "outsource" the execution of the scenario evaluations to people you trust.
I've seen variations of this approach scale down to customer applications and scale up to full-blown platform evaluations for analysts. Personally, I've used it mostly for performance and security evaluations of various technologies; it helps me quickly find holes I might otherwise miss and communicate what I find.