Falsely Responsible

Newbie testers often believe they can actually stop their product from shipping. Product teams often foster this belief by forcing their testers to sign off on their product before shipping it. Can you imagine the result if we decided to exercise our alleged power and elected not to sign off?

CEO: Why haven't we shipped yet?
Test Manager: Michael hasn't signed off on the build yet. He feels it needs more testing, that it still has bugs lurking in it. He says it will take him at least two more weeks before he will know whether, and when, he will be able to sign off.
CEO: I'm gonna sign Michael off!

There are so many things wrong with this picture!

  • Why are we testers supposed to sign off on the product when no one else has to do so?
  • Why does management pretend to give us testers authority we do not really have?
  • Why do we testers let ourselves be put in this untenable position of false authority?
  • Why aren't we testers paid commensurately with our alleged power and responsibility?

The first I have never understood. The second, it seems to me, occurs when management cannot make a decision - at least not until someone else makes one for them to overrule! The third I find to be that we do not know better, or we do know better and are afraid to do anything about it. The fourth I believe ties back to the second: while management is happy to pretend to give us this authority, they aren't about to even pretend to give us commensurate remuneration, let alone actually give it to us!

If you are in this situation, you have at least the following options:

  • Exercise your purported rights and see what happens.
  • Refuse to exercise this responsibility and see what happens.
  • Join a different team, one that understands how to treat and use their testers.
  • Stay put and attempt to educate your management that making this type of decision is their job, and that your job is only to provide them with the information they need to make it.

If you are in this situation, what are you doing to get yourself out of it? If you ever have been in this situation, how do you get yourself out of it? Let me know: michael dot j dot hunter at microsoft dot com.

Comments (9)

  1. Zach Fisher says:

    I have chosen to stay put and educate. When education fails, I fall back on diplomacy and a term we like to call provisional sign-offs.

    These kinds of sign-offs are rife with language tailored to alert those in management that the product they are releasing is riddled with known – and as yet unknown – issues. It provides no guarantees, but it does attempt to capture the parameters for my confidence, or lack thereof. For example:

    "Given the limited time and resources for the testing of X, I am provisionally signing off for release Y. During the course of the limited testing effort, the following issues were observed and unresolved <list follows>, In addition, there is sufficient evidence that additional as yet unknown issues exist. Discovery of these issues falls outside the allotted resource parameters and will not be investigated for this release."

    You can see where this goes.

    These sign-offs must be honest and true to pass my own "personal integrity test", even though I realize that providing them is a concession in and of itself. Management gets their prescribed dose of bureaucracy, and I get to sleep well at night. Sometimes it is just easier to "fill out the form" than to change the establishment.

  2. I think it’s fine for testers to sign off a product (and developers too, for that matter).

    But this should not lead to a situation where testers are holding back the release of a product.

    Only the Product Owner should decide whether or not to release a new product (with all its faults), because only he can know if the new product (with bugs) is still an improvement over the current situation (without the product).

  3. Steve says:

    As a non-tester, but someone similar to that CEO you mentioned, I’d say the primary reason for having signoff on releases is often to have a paper trail, to be compliant with some auditable standard. Nowadays whenever we go through a security or SOX audit, the auditor asks for evidence of full tracking of a change, from the request through QA to the launch, and QA approval is helpful with this.

    If a tester wants to influence a release for real (not just on paper), I would recommend they establish a reputation, and work it from a social angle. If a respected tester speaks to the dev lead or project lead quietly and indicates they think it’s a mess and not ready for release, they could have a real effect on the release. Particularly if they’ve been right in the past, and haven’t acted like the boy who cried wolf.

    Really, role power is never hugely significant on its own for anybody, from a tester up to the CEO. Any role power has to be amplified through reputation and social contacts.

  4. CrazyDave says:

    uh, just about every org inside of MS has this FALSE pretence.

  5. This happens all the time!

    On a smaller scale it’s, "hey, QA, is this bug a blocker?"

    On a larger scale it’s, "so are we ready to ship it?"

    "I dunno" is generally not an acceptable answer to either question.

    The way I look at it, this situation has arisen at every job I’ve ever had in test. I can leave and go somewhere else, but I’m guessing it’ll happen again. So I choose to stay and alter the circumstances so that I can give a reasonable answer to the question.


    – make the decision communal.

    – be prepared to discuss likelihood and workarounds.

    – educate yourself about the non-technical factors in the decision.

    – don’t let it go beyond the test manager into the test team.

    – go around the formal channels to get the real work done so you don’t put people in a corner publicly.

  6. Joseph Kubik says:

    You’re spot on.

    The only caution I’d add is that when people ask the test team, "can we ship it?", they’re not really asking the team for sign-off; what they want to know is:

    "What don’t I know?"

    "What do you know that is broken?"

    "What do you know that works?"

    But, that’s a lot of questions to ask when "do you think we can ship it?" is so short and sweet.


  7. initcontact@grahamwideman.com says:

    Hi Michael,

    With all the thought that has gone into testing process, I guess I’m (maybe naively) puzzled why this aspect of testing feedback hasn’t progressed to a more practical state-of-play?  Why the impracticalities that you point out, and why all the drama?

    I.e.: Why is it a sign-off ritual at all, when what’s needed is a report-current-status juncture? Similar to what Zach suggests, but less defensive in tone:

    — Formerly open issues resolved:  A, B, C

    — Issues still open: D, E, F, with these likely impacts on the user: blah blah

    — These areas of functionality not assessed: G, H, I, with possibility of impacts blah blah.

    This is more or less what Joseph K suggests is being asked.

    It seems like the key point here is for the info and data on risks/costs of shipping to make their way to the person(s) who also has at hand info on the business benefits of shipping, and thus the capacity to combine these into a coherent risk/cost/benefit assessment.

    It doesn’t make sense for the actual *authority* to ship/no-ship to rest below the level where that assessment can be made and where responsibility can be placed to weigh all factors in the final decision.

    By contrast, placing any *responsibility* (ship/no-ship or anything else) with someone without giving them the complementary *authority* is a dysfunctional situation more or less by definition.

    Anyhow, it sounds like this is *not* what’s practiced, and if not why not?

    — Graham

  8. Thanks everybody for your comments and suggestions regarding how to handle this situation! Also thanks, Steve, for pointing out that signoff can be a regulatory requirement – something I had not considered.

    gwideman: I also am puzzled as to why this is still the state-of-the-art for many product teams. That puzzlement, in fact, is one reason I wrote this post! Even when signoff is required for regulatory reasons, it seems to me that the signoff should be more "Yes, I have tested this and this and this in such-and-such a way", as you and others suggest, not "I affirm this product is ready to ship!"

    Query to my readers: do any of you work/have any of you worked for a team which did not have this mentality, or which moved away from it? If so, what was different that this mentality was not in place/was moved away from? How did the lack of this mentality change your team culture?

  9. Mark Irvine says:

    Hi Michael,

    I think you are absolutely right; the key to this whole dilemma is to shine a spotlight on the false authority.

    I recently started in a new company (about 6 months ago) as the only tester in a group of 8 developers. Part of my role is to ‘establish a QA function’ in this site (teams at other sites generally have a number of QA people).

    Early on I was presented with a document describing the Software Development Life Cycle process in this company. And sure enough, right at the end, QA signs off on the release. I was asked to review the document and make comments, so I raised this very same issue. As you can imagine, the first reaction from the Development manager was that they needed sign-off for audit compliance. I tried explaining what I felt was the role of testers, and the development manager explained what she thought was the role of testers. We were actually very much in agreement on most things. The language in the SDLC document was, however, a bit unclear and did not reflect anyone's views. But she still was not convinced that testers don't sign off on the product.

    So I asked her some questions:

    Q: If I find lots of bugs, and they are not fixed, would you still expect me to sign-off?

    A: Hmm, well, no, probably not.

    Q: Would my refusal to sign-off actually prevent a product getting released?

    A: Hmm, well, no, probably not.

    We talked some more, she saw why I was objecting, and we worked out the following.

    At the end of the test cycle I provide a report listing all the issues open, closed, etc. as well as test coverage, risks and what we did to cover those risks, and anything I didn’t get time to finish in the time I had. Then I tick the box for QA indicating that I have completed the work I planned to do, and attach the test report.

    After that we meet with the senior developer, talk about the findings and the issues, and she makes a choice about what to fix now, what to fix later, and if the product is to be released. If she decides not to fix something, I have every right to argue or question that decision, and to provide more information if necessary, but in the end, the choice is hers. She is, after all, the manager responsible for the product.

    It has worked well so far. Sometimes bugs are not fixed and they go into the ‘known issues’ in the documentation. Sometimes the product goes back and is reworked. And sometimes it is released as-is even with the bugs because there is some compelling business reason to release it. But it is an informed decision where my QA report forms one part of that decision process.

    – Mark
