Assessing Tester Performance

Using context-free software product measures as personal key performance indicators (KPIs) is about as silly as pet rocks!

Periodically a discussion of assessing tester performance surfaces on various discussion groups. Some people offer advice such as counting bugs (or some derivation thereof), the number of tests written in x amount of time, the number of tests executed, the percentage of automated tests compared to manual tests, and (one of my least favorite measures of individual performance) the percentage of code coverage.

The problem with all these measures is that they lack context and tend to ignore dependent variables. It is also highly likely that an astute tester can easily game the system and potentially cause detrimental problems. For example, if my manager measured my performance by the number of bugs found per week, I would ask how many I had to find per week to satisfy the 'expected' criteria. Then each week I would report 2 or 3 more bugs than the 'expected' or 'average' number (in order to 'exceed' expectations), and I would sit on any additional bugs I found that week in case I fell below my quota the following week. Of course, this means that bug reports are being artificially delayed, which may negatively impact the overall product schedule.

The issue at hand is this bizarre desire by some simple-minded people for an easy solution to a difficult problem. But there is no simple formula for measuring the performance of an individual. Individual performance assessments are often somewhat subjective, and influenced by external factors identified through Human Performance Technology (HPT) research such as motivation, tools, inherent ability, processes, and even the physical environment.

A common problem I often see is unrealistic goals such as "Find the majority of bugs in my feature area." (How do we know what the majority is? What if the majority doesn't include the most important issues? etc.) Another problem I commonly see is individuals who over-promise and under-deliver relative to their capabilities. I also see managers who dictate the same set of performance goals to all individuals. While there may be a few common goals, as a manager I would want to tap into the potential strengths of each individual on my team. I also have different expectations and levels of contribution from individuals depending on where they are in their career, and also based on their career aspirations.

So, as testers we must learn to establish SMART goals with our managers that include:

  • goals that align with my manager's goals

  • goals that align with the immediate goals of the product team or company

  • and stretch goals that illustrate continued growth and personal improvement relative to the team, group, or company goals

(This last one may be controversial; however, we shouldn't be surprised to learn that individual performance is never constant in relation to one's peer group.)

But (fair or not), for a variety of reasons most software companies do (at least periodically) evaluate their employees' performance in some manner. The key to success is in HPT and in agreeing on SMART goals upfront.

Comments (9)

  1. Hello BJ,

So, what should the tester's metrics be, in your view?

    And some questions

    -is automated testing a gray or black or white technique for you?

    -is model based testing a gray or black or white technique for you?

    Thanks a lot,

    Javier Andrés Cáceres Alvis

  2. I.M.Testy says:

    Hi Javier,

I don’t have a cookbook of tester metrics. Each project is different, and the specific expectations may differ among individual testers. At Microsoft we have guidelines on expectations at the different career stages, but it is up to managers and their direct reports to discuss specific, measurable, achievable, realistic and timely goals.

    Chapter 2 in "How We Test Software At Microsoft" discusses some of the expectations in more detail.

    I don’t consider gray, black or white box as techniques. To me the phrases black-box, white-box, and gray-box are merely conceptual abstractions of how we design test cases, and are independent of whether that test case is automated or not.

So, I may design some automated test cases using a white-box design approach, where I base the design of my test case on the coded implementation. I may also design some automated test cases using a gray-box test design approach, and I may design others from a black-box approach. Basically, the conceptual approach I use to design and develop an automated test case depends heavily on the purpose of that specific test case.
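To make the distinction concrete, here is a minimal sketch; the `clamp` function under test and its test cases are invented for illustration. The same automated assertions can be designed from the specification alone (black-box) or from the branch structure of the implementation (white-box):

```python
# Hypothetical function under test: clamp a value to [low, high].
# (This function and its tests are illustrative, not from the post.)
def clamp(value, low, high):
    if value < low:
        return low      # branch 1
    if value > high:
        return high     # branch 2
    return value        # branch 3

# Black-box design: cases chosen from the specification alone --
# boundary values around the valid range, no knowledge of the code.
assert clamp(-1, 0, 10) == 0
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10
assert clamp(11, 0, 10) == 10

# White-box design: cases chosen by reading the implementation,
# so that every branch above is exercised at least once.
assert clamp(-5, 0, 10) == 0    # branch 1
assert clamp(15, 0, 10) == 10   # branch 2
assert clamp(5, 0, 10) == 5     # branch 3
```

Note that both sets are automated; only the conceptual approach used to derive them differs, which is the point being made above.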

    BTW…bandeja paisa sounds delicious and I would love to try it someday!

  3. Shrini says:

    >>>To me the phrases black-box, white-box, and gray-box are merely conceptual abstractions of how we design test cases, and are independent of whether that test case is automated or not.

Well said BJ … Understanding that these are NOT techniques themselves (technique = recipe) will immensely impact the way people use them. As it happens with abstractions (a simplified version of a complex thing) – you can get better insight into the software, but you "should not" confuse abstractions with techniques.


  4. I.M.Testy says:

    Hi Shrini,

    You are mistaken when you equate a technique to a recipe. A technique is a systematic process generally based on one or more heuristic patterns and/or fault models to help solve a particular problem.

You’re right, a person should not confuse abstraction with a technique. However, learning to create abstract models of software is extremely useful in the application of various testing techniques.

For example, state transition testing is a well-known functional testing technique that requires the tester to model important states and traversals between those states. Other ‘techniques’ also require in-depth analysis and abstraction in their application.

    Those who think techniques are as simple as following a recipe are doing it wrong!

  5. rikard_edgren says:

    Hi BJ

    I fully agree with the drawbacks you have listed with measures for performance, but I don’t like your reference to SMART goals:

    Specific – we become narrow-minded

    Measurable – can’t use fluffy, good things

    Attainable – no holistic stuff

    Realistic – can’t aim really high

    Timely – gives short-term thinking

    Sometimes it seems like the acronym itself is more important than the content.

    Personally I prefer goals that are Vague, Ongoing, Motivating, Important, Trustworthy



  6. I.M.Testy says:

    Hi Rikard,

    Clever man!

Unfortunately, I would argue your assessment of SMART goals is short-sighted and one-dimensional.

Specific – helps us to define clear expectations between us and others (mainly our managers) of how we intend to demonstrate added value (importance) to the project/team/organization.

    Measurable – helps us define what success looks like. Just achieving our goal is great, but exceeding a goal is usually demonstrated with those fluffy, good things you mention…we call those "scooby snacks!"

Attainable – goals that are achievable are much more psychologically motivating than goals that are clearly not attainable. There are different types of goals. Some are independent and some are holistic. Also, we encourage people to set a variety of goals that span a spectrum from easily achievable to ‘stretch’ goals that are intended to self-motivate a person to learn new things and expand beyond their current capabilities. It is good to have a mix of goals.

    Realistic – the goals we set are only limited by our ambition and our capacity to achieve those goals

    Timely – I have 6 month goals, 1 year goals, and 3 year goals. The 3 year goals are reevaluated every 6 months. We encourage new testers to think in terms of both short term and long term goals, and have a healthy mix of both. Newer testers will generally have more short term goals, but should have at least 1 long term, stretch goal. More senior testers may have several short-term goals and usually 2 or more long-term, stretch goals.

There are different types of goals. Some are narrow in scope with a single concrete deliverable; once that goal is complete we move on to different goals. But some goals are ongoing (constantly improving). However, any successful manager needs a way to track progress toward a goal regardless of whether it is short-term or long-term, and SMART goals help both us and our managers know when we hit specific milestones (trustworthiness).

  7. rikard_edgren says:

    Thanks BJ,

    for very good clarifications.

    Goals (as everything else) surely are multi-dimensional.

    I especially like that you have mix of goals, from several angles.

What you haven’t mentioned yet is whether you also consider the people the goals are for – in the sense that some people don’t like (and don’t need) goals at all, or that some people want very clear goals but would benefit from no goals at all for a while, in order to find their intrinsic motivation.


  8. I.M.Testy says:

    Hi Rikard,

I absolutely agree that there are rarely one-size-fits-all goals for everyone. There may be some general team-wide or project-wide goals that apply to most people in the group, but those should be a minor part of individual goals.

Oftentimes new employees need a little direction establishing goals for personal improvement and subsequent evaluation against their peer group. In those cases, managers may offer some common goals, but they should also encourage the employee to draft at least one personal stretch goal.

    Unfortunately, I have seen some managers use a check-list type approach to goals for all of the people who report to them. Any moron can produce a list of ill-conceived goals or objectives.

    Ultimately, goal setting is a personal endeavor, and is essentially a contract between you and your manager. Both your manager and you must agree those goals are appropriate, and your manager should provide guidance on whether those goals are the appropriate goals for your personal growth.

Yes, I know that some people don’t want very clear goals. Generally, the more senior the person, the fuzzier the goals. However, even senior-level people can benefit from clear individual goals, because clear goals help your manager more objectively evaluate your individual performance in relation to your peer group and provide more objectivity in performance evaluations.
