For testsourcing to take hold of the future of testing, two key technological barriers must fall: the reusability of test artifacts and the accessibility of user environments. Let me explain:
Reusability: The reusability of software development artifacts, thanks to the popularization of OO and its derivative technologies in the 1990s, is a given. Much of the software we develop today is composed of preexisting libraries cobbled together into a cohesive whole. Unfortunately, testing is not there yet. The idea that I can write a test case and simply hand it off to another tester for reuse is rarely realized in practice. Test cases are too dependent on my test platform: they are specific to a single application under test; they depend on some tool that other testers don’t have; they require an automation harness, library, network configuration (and so forth) that cannot be easily replicated by a would-be reuser.
Environment: The sheer number of customer environments needed for comprehensive testing is daunting. Suppose I write an application intended to run on a wide variety of mobile phones. Where do I get all these phones to test my application on? How do I configure all these phones so they are representative of my intended customers’ phones? The same goes for any other type of application. If I write a web app, how do I account for all the different operating systems, browsers, browser settings, plug-ins, registry configurations, security settings, machine-specific settings, and potentially conflicting applications?
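To make the combinatorial explosion concrete, here is a minimal sketch that enumerates a (deliberately tiny and hypothetical) web-app test matrix; real matrices have many more dimensions and values, and the numbers multiply accordingly:

```python
from itertools import product

# Hypothetical dimensions of a web-app test matrix; the values are
# illustrative assumptions, and real matrices are far larger.
oses = ["Windows", "macOS", "Linux"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
plugins = ["none", "ad-blocker", "pdf-viewer"]
security = ["default", "hardened"]

# Every combination is a distinct environment a tester might need.
matrix = list(product(oses, browsers, plugins, security))
print(len(matrix))  # 3 * 4 * 3 * 2 = 72 distinct environments
```

Even this toy matrix yields 72 environments; add registry configurations, browser settings, and installed applications, and the count quickly outgrows any physical lab.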
The answer emerging for both of these needs is virtualization, which is steadily becoming cheaper, faster, and more powerful, and which is being applied to application domains that run the gamut from lab management to IT infrastructure deployment.
Virtualization has great potential to empower the ‘crowd’ for crowdsourcing. Specialized test suites, test harnesses, and test tools can be one-clicked into virtual machines that can be used by anyone, anywhere. Just as software developers of today can reuse the code of their colleagues and forebears, so too will the testers in the crowd be able to reuse test suites and test tools. And just as that reuse has increased the range of applications that a given developer can reliably build, it will increase the types of applications that a tester can test. Virtualization enables the immediate reuse of complicated and sophisticated test infrastructures.
Conveniently, virtualization does the same favor for testers with respect to user environments. A user can simply one-click their entire computer into a virtual machine and make it available to testers via the cloud. If we can store all the videos in the world for instant viewing by anyone, anywhere then why can’t we do the same with virtual user environments? Virtualization technology is already there (in the case of PCs) or nearly there (in the case of mobile or other specialized environments). We simply need to apply it to the testing problem.
The end result will be the general availability of a wide variety of reusable, automated test harnesses and user environments that any tester anywhere can employ. This empowers the crowd for crowdsourcing, putting them on more than even footing with specialized outsourcers from a technology standpoint. And since crowd testers far outnumber the outsourcers (at least in theory, if not yet in practice), the advantage clearly lies with this new paradigm.
Market forces will also favor a crowdsourcing model powered by virtualization. User environments will have cash value, as crowd testers will covet them to gain a competitive advantage. Users will be incentivized to click that button to virtualize and share their environment (yes, there are privacy implications to this model, but they are solvable). And since problematic environments will be even more valuable than those that work well, there will be an upside for users who experience intermittent driver and application errors: the test VMs they create will be worth more … there’s gold in those lemons! Likewise, testers will be incentivized to share their testing assets and make them as reusable as possible. Market forces favor a future with reusable test artifacts, and virtualization makes it possible.
So what does this virtualization-powered future mean to the individual tester? Well, fast-forward 2-5 years (or longer if you are skeptical), by which time millions (?) of user environments will have been captured, cloned, stored, and made available. I can envision open libraries of such environments that testers can browse for free, or proprietary libraries available by subscription only. Test cases and test suites will enjoy the same treatment and will be licensed for fees commensurate with their value and applicability.
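What might browsing such a library look like? Here is a minimal sketch, with the caveat that the `EnvImage` type, its metadata fields, and the example entries are all invented for illustration, not an existing catalog format:

```python
from dataclasses import dataclass

# Hypothetical record for one captured user environment in a library.
@dataclass
class EnvImage:
    env_id: str
    os: str
    browser: str
    known_flaky: bool  # environments with intermittent errors: the "lemons"

# A toy library; a real one would hold millions of captured environments.
library = [
    EnvImage("vm-001", "Windows 10", "Chrome", False),
    EnvImage("vm-002", "Windows 10", "Firefox", True),
    EnvImage("vm-003", "macOS", "Safari", True),
]

# A tester shops for the problematic environments, which are the valuable ones.
lemons = [e.env_id for e in library if e.known_flaky]
print(lemons)  # ['vm-002', 'vm-003']
```

The point of the metadata is discoverability: testers pick environments by the properties that matter to their application, the same way developers pick libraries by their APIs.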
Perhaps there will come a time when there are very few human testers at all; only a few niche and specialized products (or products of extreme complexity, like operating systems) will actually require them. For the large majority of development, a single test designer can be hired to pick and choose from the massive number of available virtual test environments and execute them in parallel: millions of person-years of testing wrapped up in a matter of hours, because all the automation and end-user configurations are available and ready to use. This is the world of testsourcing.
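The test designer’s fan-out workflow can be sketched in a few lines. Here `run_suite` is a stand-in for booting a captured VM image and executing the automation packaged inside it; the environment IDs and the simulated result are assumptions made for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for booting a VM image and running its packaged test suite.
# Real code would talk to a virtualization backend and collect results.
def run_suite(env_id: str) -> tuple[str, str]:
    return (env_id, "pass")  # simulated verdict

# Hypothetical environment IDs pulled from the library.
environments = [f"vm-{n:03d}" for n in range(1, 6)]

# Fan the canned suite out across all environments at once.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = dict(pool.map(run_suite, environments))

print(results["vm-001"])  # pass
```

The parallelism is the whole trick: because each environment is a self-contained, ready-to-run artifact, throwing more machines at the matrix shrinks wall-clock time without any per-environment setup work.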
It’s the end of testing as we currently know it, but it is the beginning of a whole new set of interesting challenges and problems for the test community. Everything we currently know about testing applies to this new world; it’s just executed in a completely new fashion.
And it’s a viable future that requires no more than virtualization technology that either already exists or is on the near-term horizon. It also implies a higher-order effort by testers as we move into a design role (when actually performing testing) or a development role (when building and maintaining reusable test artifacts). No more late-cycle heroics: testers are first-class citizens in this virtualized future.