Can Developers Test?


Diligent Reader Ayaz asks:

Everywhere there is talk about *the tester mentality* and how testers should refine their approach to a problem. My question is: what would you advise a *developer* so that he can test his code and catch the bugs himself instead of waiting for a test engineer to report them? What would *your* approach be if you were an SDE?

I know there are always time constraints pulling at you as a developer, but I still think that a fair amount of testing should be done on the development side rather than the testing side. Testing and finding an issue at the developer’s end is, for me, far more effective and even more time efficient (because no time is lost in all the reporting and discussion).

My approach is to come to work as a *Tester* and pretend I am testing someone else’s code. But this does not work as well as it should, because I always find a soft spot for myself while testing. It’s not completely possible (at least for me) to flush out all knowledge of the code and implementation details and start with a completely different state of mind.

I completely agree that developers should do as much testing as they possibly can. Bugs developers find before they check in are cheap to fix – no red tape or change board to deal with. Bugs developers find may be a symptom of a larger issue that will radically affect their design or implementation. Every bug a developer finds and fixes is a bug a tester doesn’t have to waste time finding and reporting. <g/>

I have not yet had the misfortune of working with a developer who refused to test their own code. I *have* worked with many developers who did not have the first clue how to test their own code. My Testing For Developers checklist is an attempt to remedy that.

If a developer wants to go even further into testing their own code, they can work to acquire the tester mentality. As Ayaz says, everyone talks about the tester mentality, and everyone has their own definition of it. I think of the distinction this way: developers tend to focus on “How can I make my code work?”, whereas testers tend to focus on “How can I find all of the places and scenarios where my developer’s code doesn’t work?”

These are two very different mindsets. Take Test-Driven Development. This is a design and coding technique where the developer writes a test describing one tiny bit of functionality, runs the test to verify it fails, writes just enough code to make that test pass (whilst keeping all other tests passing, of course), and then repeats. This is all about “How can I make my code work?”
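One turn of that loop can be sketched with a tiny example. Everything here is hypothetical – the `TemperatureConverter` class and its method are made up for illustration, and plain assertions stand in for a real JUnit runner:

```java
// One TDD cycle, sketched. In real TDD you would write testFreezingPoint
// first, watch it fail (the method doesn't exist yet), then write just
// enough of TemperatureConverter to make it pass, and repeat.
public class TddSketch {

    // The production code that made the tests below pass.
    static class TemperatureConverter {
        static double celsiusToFahrenheit(double celsius) {
            return celsius * 9.0 / 5.0 + 32.0;
        }
    }

    // A test describing one tiny bit of functionality.
    static void testFreezingPoint() {
        double f = TemperatureConverter.celsiusToFahrenheit(0.0);
        if (f != 32.0) throw new AssertionError("expected 32.0, got " + f);
    }

    // The next cycle added a second tiny bit of functionality.
    static void testBoilingPoint() {
        double f = TemperatureConverter.celsiusToFahrenheit(100.0);
        if (f != 212.0) throw new AssertionError("expected 212.0, got " + f);
    }

    public static void main(String[] args) {
        // Run all tests after every cycle, keeping earlier ones passing.
        testFreezingPoint();
        testBoilingPoint();
        System.out.println("all tests pass");
    }
}
```

Note how every test here describes what the code *should* do – nothing in the loop pushes you toward the inputs nobody planned for.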

Contrast this with the techniques James Whittaker describes in his books, which are effectively checklists of ways to find common defects (i.e., places where your code does not work). Or with the questions I list in my Testing For Developers checklist, which aim to get you thinking about what you might have done wrong – that is, “How can I find all of the places and scenarios where my code doesn’t work?”

Switching back and forth between dev-think and tester-think is hard. Really hard. Essentially you are swapping out one mindset and swapping in a completely different one. Lots of practice is typically required to gain even a modicum of facility in making this swap. Oooh – a challenge! What developer (or tester) can pass that up? <g/>

My final suggestion is to take the same approach as you would when learning any other skill: observe, talk with, and train with a master. Which is to say, watch your testers do their thing! Talk with them about how they approach their work, why they do the things they do. Pair test with them, which is useful and productive for all the same reasons pair programming is. Ask them to help you brainstorm test cases for your code. Any good tester will jump at the chance to help their developer test better!

How much testing should developers do? The answer is completely dependent on your context. In general, I feel that developers should do as much testing as they possibly can, freeing up my time from finding checklist bugs and allowing me to focus on cross-feature, integration-type bugs that tend to be harder to find and have nastier effects. You may answer differently. More important than any particular answer, though, is to simply have the discussion with your feature team!

Comments (8)

  1. Kevin Daly says:

    Developers have to do whatever testing they can…in my opinion that is *never* going to be enough however, because as developers we suffer from the singular disadvantage of knowing what we think the code is supposed to do and how it is supposed to work. This almost inevitably makes us blind to things that a normal user might try first off as a matter of course but which we would never think of. "But who would do that?" looms large over everything. It’s very hard to get over the psychological barrier of prior knowledge.

  2. AlfredTh says:

    I don’t understand needing testers for anything other than cross-feature, integration-type bugs. I think that having too many testers (say, more than one per 40-50 developers) causes more problems than it solves. It makes developers lazy and careless and destroys any chance of getting reliable code in a reasonable time. You can’t test quality in. It has to be designed in, and that means starting with good program design, developers who do it right the first time, and developers who verify their own work.

    Admittedly it has been 25 years since I was an OS developer, and operating systems have gotten a little more complicated, but I still don’t see anything fundamental that makes writing good-quality code harder now than it was back then.

  3. Lothar says:

    I develop server-based software primarily and I use JUnit all the time. In retrospect, my way of programming has changed over the last six years (since I started using JUnit): as a developer who writes the test code himself, the question I ask myself is no longer "what is the code supposed to do and how is it supposed to work", as Kevin stated. Now I ask myself "what is the code supposed to do, how is it supposed to work, and how can I find out that this state has been reached without testing it by hand every time".

    To be able to test functionality in an automated way, the way you program that functionality changes. You automatically break things up into smaller pieces (methods, classes, …) to reduce the number of combinations within one test, leading to clearer source and more reusable code.

    But – as Kevin already stated – a testing developer does not eliminate the need for a tester, who brings a different perspective (as a user) and a different goal: to see the application break, not to see it work.
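    For example, the decomposition might look like this (a minimal sketch – the names and the CSV-field scenario are hypothetical):

    ```java
    // Splitting one monolithic operation into small, separately
    // testable pieces, so each piece needs only a handful of tests.
    import java.util.Arrays;
    import java.util.List;

    public class SmallPieces {

        // Piece 1: split a comma-separated line into fields.
        static List<String> splitFields(String line) {
            return Arrays.asList(line.split(","));
        }

        // Piece 2: decide whether one field is a valid quantity.
        static boolean isValidQuantity(String field) {
            try {
                return Integer.parseInt(field.trim()) > 0;
            } catch (NumberFormatException e) {
                return false;
            }
        }

        // The composed operation is a thin layer over tested parts,
        // instead of one method whose input combinations explode.
        static boolean lineHasValidQuantity(String line, int column) {
            List<String> fields = splitFields(line);
            return column < fields.size() && isValidQuantity(fields.get(column));
        }

        public static void main(String[] args) {
            System.out.println(lineHasValidQuantity("widget, 7, blue", 1));
            System.out.println(lineHasValidQuantity("widget, -3, blue", 1));
        }
    }
    ```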

    Best regards, Lothar

  4. Ayaz says:

    Thanx Michael for a great post :)

    Well, as I already said, the need for testing at the developer’s end can never be ignored. The important point is to define the threshold. If developers test too much they will lose focus on development, and if they test too little they will never make a stable product; instead they will keep fixing tester-reported issues.

    As for me, I do sanity testing and some regression testing at my end. Once I’m sure that I’ve implemented the required feature and it’s working properly for both positive and negative scenarios, I move on to regression testing. Once I’m sure that my code has caused no regression, I submit it to the Quality Assurance team and my product goes through their testing cycle…

    This way I mostly find out any problems related to my code before submitting. It saves me some QA cycles and a lot of time :)

  5. Anu says:

    Off topic: could you post an entry on when to resolve a bug as "Dupe" versus "No repro"? I’ve been following the rule of resolving a bug as a dupe if and only if the repro steps of the two bugs are the same. If they have the same root cause, it totally depends on the context whether you can resolve them as dupes or not. Many times I see devs mark two bugs as dupes because of one large root issue. Something like "nested classes not working in scenario A" causes 15 bugs on different nested-class scenarios to be resolved as "Dupe"!!! At that rate, we might as well have a giant bug called "Visual Studio not coded fully" and then resolve all bugs in the world as dupes of that!

    Scenario #2: bugs resolved as no repro because they don’t repro on the dev’s machine, which has a build 10 days newer than the build they were filed on. Errr – how is this "no repro"? The justification offered is, "I might have fixed this as part of a larger checkin for another bug fix… it is not reproing on a newer build now. So, no repro"!!! WTF?? How does that become a no repro?

    I think this still happens because there are no clear guidelines on how to resolve bugs. I was hoping that you would either point me to a doc that outlines this or make a post yourself. Thanks for listening to the rant! :-)

  6. micahel says:

    Kevin: Yep, developers know how their code *should* work and so they know what it "*can’t*" do. Which of course is often exactly what it does do! <g/> This is one reason I find checklists helpful: they list specific things to look for and so do an end-run around knowing what the code can and can’t do.

    Anu: Thanks for the blog post idea! I’ll discuss resolving bugs soon-like.

  7. Sanat Sharma says:

    I agree that developers can’t test the application with the mindset that a tester has. I have an attitude of "TEST TO BREAK". With around six years of experience in software testing, what I can suggest is that developers should write their code carefully and perform unit testing thoroughly. If possible, perform integration testing also. And most important, document all of the test cases performed during unit testing or integration testing, and pass those documents to the testing team. They will definitely help the testing team take the testing further.

    – Sanat Sharma