Modern software developers bear little resemblance to our forebears. We’ve forsaken their jackets and ties in favor of hoodies and t-shirts. We’ve quit their offices and cubicles to occupy hacker hostels and corner cafés. They had floppies and sneakernet. We have GitHub. They printed and stored; we share and post. They worked for big companies with distribution channels. The world is our distribution channel. Where, with all these changes, do we stand with software testing?
Let’s face it, the 1990s were the golden age of software testing. As an industry we were still figuring things out. Global or local data? File and variable naming conventions? Time constraints versus memory utilization? Library, procedure, or inline code? Use or reuse? And the granddaddy of them all: how do we fix bugs that occur in the field when the only way to get bug reports is by phone or email, and the only way to update our software is by mailing out a new set of floppies? We weren’t very experienced at writing code, and once that code shipped, fixing it was a really, really painful proposition.
No wonder we put so much time and effort into testing. We had no choice but to double-check developers’ work and try to ensure that as few bugs as possible made it into the released product. Like I said, it was a golden age for software testers. Small chance of getting it right, large chance of expensive rework. Testers were the insurance policy no company could afford to decline.
But then the world changed. First came the web, which made software updates a small matter of refreshing a page. All those floppies were F5-ed into oblivion. Then came mobile apps, which could collect their own user telemetry, create their own failure reports, and prompt users to update when necessary. All the while, the risk of a shipped defect was decreasing dramatically. So-called waterfall development models were replaced with agile methods that produced better code out of the box. A collective intelligence around how to code grew, and the body of knowledge of coding practices matured. The art of coding has become downright pedestrian.
Quality is no less important, of course, but achieving it requires a different focus than in the past. Hiring a bunch of testers ensures only that you will need those testers. Testers are a crutch, a self-fulfilling prophecy: the more you hire, the more you will need. Testing is much less a distinct role now than an activity blended into the work developers perform every day. You can continue to test like it’s 1999, but why would you?
You can’t test in quality, but you can code it in.
And at the tail end of the lifecycle, testing can now involve users at a level it never could in the past. Who, after all, is the better judge of a bug: the user who is honestly trying to use the software to get work (or pleasure) done, or a tester who has a preconceived (and unavoidably biased) notion of how the software is supposed to work? Why must a tester serve as the intermediary between the developer and the user when the user is only a click away? Can you imagine the impact on quality when developers and users have no middleman getting in their way?
Quality, and therefore testing, is not something separate from software development, unless your software is going into a nuclear power plant, a medical device, or an airplane, where it remains difficult (for now) to recall post-deployment. For the vast majority of app development on this planet, software testing is an activity within the development process, and it keeps happening after the software is released. Modern testing is an activity; it doesn’t require a separate role to perform it. It’s time to bring quality into the 21st century, where testing is such an integral part of software development that you’ll often forget you are doing it at all, it has become so familiar. Hey, wouldn’t that be awesome: testing that gets done without anyone making such a big fuss about it?
This is not your father’s application development process. It’s yours. Own it.