Let us talk about “Quality Bars”

When we talk about quality bars, we are not talking about the phenomenal gems such as the Kalahari Oasis in the Mabalingwe game reserve, which I visited and thoroughly enjoyed on more than a couple of hot days under the sun.

Instead, we are starting a discussion on the quality bars we are using and considering for ALM Ranger projects, with the hope that you (the ALM community) will give candid feedback and share your experience.

Why do we need quality bars?

We strive to ship quality solutions and experiences that:

  1. Accelerate the adoption of Visual Studio with out-of-band solutions for feature gaps and value-add guidance
  2. Create raving and unblocked fans
  3. Meet regulatory and legal requirements

The first has been our mission from the start, and the second will always fuel (inspire) us and the community. The third is the one that has introduced, and will continue to introduce, challenges and friction in the ALM community, because it requires bandwidth (which we have little of) and razor-sharp focus.

What are the challenges with quality bars?

Let’s start by looking at the quality bar slider diagram we created in an attempt to highlight the challenges of striving for full compliance with industry guidelines and standards, while delivering a quality user experience. At this point we assume that you are familiar with the ALM Rangers and their geographically dispersed, virtual, part-time and volunteer-based community.

We deliver a menu of solution types, ranging from guidance, supporting whitepapers and samples, and quick response tooling samples, to tools, or a combination thereof. Raising the Quality Bar for Documentation gives us an overview of the documentation quality bar, while Raising the Quality Bar for Tooling is the starting point for the quality bar discussed herein.

When we talk about tooling, we can move the slider from none (pure guidance) to a fully supported “shrink-wrapped” product. The compliance and quality bar checks and balances take a huge jump when we reach, and especially when we cross, the “ship binaries” marker.

As mentioned, the ALM Rangers community is based on volunteers, who invest their passion for technology, their real-world experience and their personal time on a part-time basis.

What motivates the ALM Rangers? Great question, which will probably never have a definitive answer. Here are a few replies from a recent survey in which we asked ALM Rangers why they do what they do:

  • Collaborate with passionate professionals with real-world experience on valuable projects
  • Ability to work on the latest technologies
  • Credibility in the field and to share real world experience amongst fellow Rangers
  • To develop solutions for real-world impediments and share them with the whole community outside the Rangers
  • Work together with other highly passionate people on subjects of my interest
  • Opportunity to create things valuable to the wider community, opportunity to influence the product
  • Contributing to others’ success and providing value for the community around Visual Studio
  • Because I enjoy helping

The challenge is that the motivation factor starts declining as we move the quality bar slider to the right.

As we move the quality bar slider to the right we get closer to the product group quality bars, but we also start inheriting more and more “process tax”. Quality solutions, quality experiences and, in particular, regulatory and legal requirements introduce stringent compliance testing and validation. We refer to this as the “process tax”: it requires time and razor-sharp focus, and it inevitably chisels away at the motivation factor for volunteers, who want to build solutions in their personal time and not be bogged down with compliance red tape. Our current strategy is for the Program Manager (Product Owner) and Dev Leads to shield the team from as much of the quality process as possible, keeping the team focused on building and testing the bits and continuing to have “fun” with what motivates them.

We have broken down the quality bar into three main checklists:

  1. Code Quality checks, as outlined in Raising the Quality Bar for Tooling and the Ruck Guides, are used when the solution includes any source code. Effort is estimated in minutes to hours. (A minimal sketch of such a check follows this list.)
  2. Code Review checks are introduced when the solution contains source code that is more than just sample source code and is typically used by the community to create a binary. Quick Response Sample Tooling is typically shared as source code only, and the community creates and shares the binary builds. See https://aka.ms/treasure35 for examples. Effort is estimated on our backlogs in hours.
  3. Quality Essentials checks are used whenever a binary is included with the solution. Effort is estimated on our backlog in hours to days, making it impractical for quick response solutions.
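
To make the first checklist more tangible, here is a minimal sketch of the kind of automated unit test a Code Quality check expects to find alongside shipped source code. The validator and its 255-character rule are hypothetical, invented purely for illustration; only the MSTest attributes and Assert calls are real.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Rangers.QualityBar.Samples
{
    // Hypothetical validator, invented for this sketch only.
    public static class WorkItemTitleValidator
    {
        public static bool IsValid(string title)
        {
            // Assumed rule: a title must be non-empty and at most 255 characters.
            return !string.IsNullOrWhiteSpace(title) && title.Trim().Length <= 255;
        }
    }

    [TestClass]
    public class WorkItemTitleValidatorTests
    {
        [TestMethod]
        public void IsValid_ReturnsFalse_ForEmptyTitle()
        {
            Assert.IsFalse(WorkItemTitleValidator.IsValid(string.Empty));
        }

        [TestMethod]
        public void IsValid_ReturnsTrue_ForShortTitle()
        {
            Assert.IsTrue(WorkItemTitleValidator.IsValid("Raise the quality bar"));
        }
    }
}
```

Tests like these run with every build, which is why the effort for this checklist stays in the minutes-to-hours range.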

What is the base quality bar we recommend?

Looking at the Ruck Guides, we note the following suggested list:

The further you move the quality dial from left to right, the more quality and compliance checks you need to consider as part of your quality bar bucket.

  • Minimum
    • Sprint Objectives
    • Project principles
    • Code Reviews
    • Definition of Done
    • Automated Unit Testing
    • Manual Test Cases
  • When shipping binaries
    • Dogfooding (early adopters)
    • Telemetry
    • Quality Essentials (PoliCheck, ApiScan, Signing, …)
    • License agreement(s)

I will let you dig into the Ruck Guides, “Quality Bar, Definition of ‘DONE’ and knowing when it is safe to sleep peacefully” and “Code Review and Build process” for more details and examples of the ingredients such as “definition of done” and “code review”.

But, what about testing in the context of the ALM Rangers?

  • Automated Unit Testing … typically includes a mix of unit, system, integration and regression testing that is automated and performed as part of the build (see the sketch after this list).
  • Manual Testing … typically includes a mix of manual user, exploratory and user acceptance testing to validate features and edge cases, using Microsoft Test Manager.
  • Dogfooding … typically includes a mix of manual, exploratory and on-the-job testing with no predefined tests. While we have dogfooded the use of automated Coded UI testing, the preferred trend is to use humanoids who are passionate about the technology, who test edge cases we would never have considered and who are willing to give candid feedback.
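
As an aside, one way to keep that automated mix manageable in the build is to tag tests by type, so the fast unit tests run on every build and the slower checks are filtered into nightly or regression passes. The sketch below assumes MSTest; the class, test names and category values are hypothetical, while the TestCategory attribute and the vstest.console.exe /TestCaseFilter switch are real.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Rangers.QualityBar.Samples
{
    [TestClass]
    public class BuildTestMixExamples
    {
        // Fast, isolated check: runs on every build.
        [TestMethod, TestCategory("Unit")]
        public void Parser_AcceptsWellFormedInput()
        {
            int value;
            Assert.IsTrue(int.TryParse("42", out value));
        }

        // Slower end-to-end style check: filtered into the nightly/regression run,
        // for example with:
        //   vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Regression"
        [TestMethod, TestCategory("Regression")]
        public void Parser_RejectsMalformedInput()
        {
            int value;
            Assert.IsFalse(int.TryParse("forty-two", out value));
        }
    }
}
```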

Your candid feedback and food for thought would really be appreciated! Rangers, comments?