A big change is happening in the engineering teams across Microsoft IT. We are combining developers and testers into one role we are calling Software Engineer. This new role owns coding and testing equally: feature design and coding, unit and functional testing, and system-level testing such as integration and performance testing. Traditionally, we followed the philosophy that you should never test your own code; it's always better to have another set of eyes look at what you have developed. That's not the case anymore. The new philosophy is to test your own code. Let me repeat: test your own code!
I get a lot of questions about this. 1) Former devs don't know how to test as well as testers do, so won't bugs slip through? 2) How will software engineers know when they've tested enough? And my favorite, 3) How can I objectively look at my own creation (code) and find problems with it? I call this last one the "don't call my baby ugly" syndrome. Yep, we have some challenges in front of us as we become Software Engineers.
For the first question, many people talk about increasing automated regression suites or doing more code and test case reviews. These are all good ideas and should absolutely be done while developers and testers are learning each other's roles. But the concept that needs to be tied closely to the software engineer role is "flighting". Flighting also helps answer the second question, which, by the way, has plagued testers for years: how do you know when you're done testing? In the past, the answer was to over-test. Now, the answer is flighting and experimentation.
Two terms have come into frequent use around Microsoft lately: experimentation and flighting. Experimentation is when you take two slightly different versions of a new feature, release them to a segment of the customer base, and see which version works better or makes the customer happier. Flighting, although similar, means having only one version of your new feature and expanding the set of users who have access to it: from your immediate sprint team, to beta users, to the world. In IT, we can for now use these terms interchangeably, since we typically don't have two versions of the same new feature. We may have a current feature already in production that the customer is using as well as a new version of that feature, so one could argue we are doing experimentation or we are doing flighting. In either case, the underlying principle is that the customers are testing your feature for you, and who better to know how a feature will be used than the customer? So this works out well.
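To make the distinction concrete, here is a minimal Python sketch of how a service might assign users deterministically. The names (`bucket`, `experiment_variant`, `in_flight`) are hypothetical, my own illustration rather than any real Microsoft API: experimentation splits traffic between two versions, while flighting exposes a single version to a growing percentage of users.

```python
import hashlib

def bucket(user_id: str, salt: str = "feature-x") -> float:
    """Deterministically map a user to a value in [0, 1) by hashing."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def experiment_variant(user_id: str) -> str:
    """Experimentation: split users between two versions of a feature."""
    return "version_a" if bucket(user_id) < 0.5 else "version_b"

def in_flight(user_id: str, rollout_pct: float) -> bool:
    """Flighting: one version, exposed to a growing percentage of users."""
    return bucket(user_id) < rollout_pct / 100.0
```

Because the bucket is a stable hash of the user ID, raising the rollout percentage from, say, 1 to 10 to 100 only adds users to the flight; it never reshuffles users who already have the feature.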
But in order to do this, you need to change a few things from the traditional approach of delivering software. You need to break down your code base into small components that can be deployed on their own and side by side with an older version. You need to break down your automation into components as well, because you don't want to change one component and then have to run a lengthy automated regression suite that tests even the unrelated components. Be careful here to do targeted testing: testing everything slows down the delivery system, but testing too little adds the risk that more problems will be uncovered by the customers in the experiment. Add the necessary telemetry so that you know what the component is doing once it goes live. Your service in production should have the ability to turn features or components on and off, as well as direct segments of customers to the new component, and away from it, as needed.
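As an illustration of those last two points, here is a small, hypothetical in-process sketch of a gate that can turn a component on and off and record which code path customers are hitting. A real service would back this with a configuration store and a proper telemetry pipeline; the class and method names here are my own.

```python
from collections import Counter

class FeatureGate:
    """Toy feature gate: toggle a component on/off at runtime and
    keep simple counters showing which path traffic is taking."""

    def __init__(self) -> None:
        self.enabled: dict[str, bool] = {}   # feature name -> on/off switch
        self.telemetry: Counter = Counter()  # counts per (feature, path)

    def set(self, feature: str, on: bool) -> None:
        """Flip a feature on or off without redeploying."""
        self.enabled[feature] = on

    def route(self, feature: str) -> str:
        """Decide which implementation serves this request, and record it."""
        path = "new" if self.enabled.get(feature, False) else "old"
        self.telemetry[f"{feature}:{path}"] += 1
        return path
```

If the telemetry shows too many failures on the "new" path, flipping the switch off redirects traffic back to the older version immediately, with no rollback deployment.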
So you are working as a software engineer on a new feature. You test it thoroughly, applying the new skills you have learned about valid, invalid, and boundary cases. You get former testers to review your test cases and results. Now, are you ready to go live? Well, you don't have to answer that. Through experimentation you can get your component out to production and use customer usage data and monitoring to determine whether it is working as expected. Your telemetry data will tell you. You are now doing integration testing without needing a separate environment or a separate set of test cases. Will you expose some bugs to the customers? Probably. But overall the bug exposure will be limited, and if there are too many bugs, you can turn off your feature and redirect traffic back to the older version. At this point, do you roll back your changes? No. Only ever fix forward; don't roll back. Make changes to your code to fix the bugs you found during your experiment, then redeploy the component and run the experiment again. So you see, by using the concepts of flighting and experimentation, it's OK if some bugs slip through and it's OK if you don't know whether you've done enough testing. You'll learn the right amount of testing, as well as how the customer is using your feature, as you run more experiments, and eventually your flights will go live with fewer problems to fix.
For those of you struggling with calling your baby (code) ugly, let’s instead ask the question, is your baby healthy? Everyone wants a healthy baby just like every engineer should want high quality code. Even if your baby seems healthy, you still take them to the doctor’s office for check-ups. Same here. You need to do check-ups of your code to make sure it is also healthy. It’s a mental shift you need to go through. Another way to approach it is to answer the question, what would my tester find wrong with this code?
If you work for a company, I have one final thought to help you test your own code: realize that it's actually not your baby. It belongs to the company you work for. You should take pride in making your code as high quality as possible, but it's not yours to own.