I often find myself sharing examples of evaluation criteria in the prescriptive guidance space. Here are some examples of criteria from one of my favorite platform assessments:
- Best practice compliance. For a given analysis topic, to what degree did the platform permit implementation of best practices? Factors influencing best practice compliance include transparent integration (the default behavior enforces best practices automatically), user-assist features in the IDE, and the clarity of configuration.
- Implementation complexity. How difficult was it for the developer to implement the desired feature? Factors influencing implementation complexity include the ease of use of the feature (as implemented in a tool) and the amount and length of code, if any was needed.
- Quality of documentation and sample code. How appropriate was the documentation? If examples were supplied with the documentation, were they sufficient to illustrate the concept, and did they exhibit best practices?
- Developer competence. How skilled did the developer need to be to implement the security feature? To ensure an apples-to-apples comparison, prior knowledge of the tool or platform is not assumed.
- Time to implement. How long did it take to implement the desired feature or behavior? This breaks down into two components:
  - Search cost. How long did it take the user to find information on how to use the feature? Was that information shipped with the product, or was it found externally on a vendor website or via a search engine?
  - Implementation time. How long did it take, after knowledge assimilation, to configure or code the platform to implement the correct behavior?
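To make criteria like these actionable across platforms, I find it helps to capture them as a simple scoring rubric. The sketch below is my own illustration, not part of the original assessment: the criterion names, the 1–5 scale, and the unweighted mean are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Hypothetical criterion identifiers derived from the list above;
# the names and the 1-5 scale are illustrative assumptions.
CRITERIA = [
    "best_practice_compliance",
    "implementation_complexity",
    "documentation_quality",
    "developer_competence_required",
    "search_cost",
    "implementation_time",
]

@dataclass
class PlatformAssessment:
    platform: str
    scores: dict = field(default_factory=dict)  # criterion -> score in 1..5

    def score(self, criterion: str, value: int) -> None:
        # Reject criteria outside the rubric and out-of-range scores.
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= value <= 5:
            raise ValueError("score must be between 1 and 5")
        self.scores[criterion] = value

    def overall(self) -> float:
        # Simple unweighted mean across the criteria scored so far.
        return sum(self.scores.values()) / len(self.scores)

# Example: score one platform on two criteria.
a = PlatformAssessment("PlatformA")
a.score("best_practice_compliance", 4)
a.score("search_cost", 2)
print(a.overall())  # 3.0
```

In practice you would likely weight the criteria (for example, weighting best practice compliance more heavily than search cost), but an unweighted mean keeps the sketch minimal.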