My name is Suma Sushilendra and I have been a tester on the C# IDE team since August ’02. I have owned testing C# IDE features for Visual Studio 2003, Visual Studio 2005, and now for Visual Studio 2008, also known as “Orcas”. Across these products and the last five years, I have owned testing almost all of the IDE features except refactoring and debugging.
There is so much that can be said about testing. Here I am setting the stage for a burning question that has continued to amuse me for the past five years of my life as a tester, at every cycle of product development. What I present here is entirely how I feel about it and how I deal with it.
When I began my career as a tester, I became familiar with a few basic rules of the road, and I learned a set of processes, methodologies, and tools to make progress on the testing tasks I was made the owner of. As time went on, the tasks became responsibilities, and owning the testing of a feature came to mean being truly accountable for its quality.
Just for the sake of simplicity, let’s define (or re-define) a few terms to help make the point this blog post is trying to make.
Product – A software application with a bunch of old and new features, aimed at aiding, enhancing, and easing targeted activities of its customers.
Feature – A small part of a product whose purpose of existence in the product is to help in a focused activity that the customer of the product is known/expected to indulge in often.
Project – The group of management people and processes that work around the people who do the real work: driving the timelines for releasing the product, controlling the logistics of how the product is developed (in stages), making sure that the list of features is in alignment with customer expectations, and so on. Basically, this is a necessary evil for every product’s release.
Customer – User of the product, of course!
One of the very first questions that project management has to answer, even before deciding what the new product is actually going to be, is: when is it going to be released for public consumption? This boils down to asking every faculty of product making: when are you going to be done? And the fun begins when this question is popped. Looking broadly at the faculties involved in actually building the product (to keep this discussion concise and focused, I am leaving out the research that goes into deciding which features to embed in a product, what the customer expectations are, which competitor products exist in the same line, and so on), I can say there are three:
- Program Managers
o who understand how the features are going to be used by the customers
o who are expected to be more in touch with the customers’ and the competitors’ world
o and who thereby hold a say in deciding whether a feature needs to exist in a product
- Developers
o who are the builders of the features/products
o who also have a good understanding of customers
- Testers
o who make sure the feature/product quality meets the expectations of the customers buying the product
o who also have a good understanding of customers and their needs and pain points
o who get to use the product soon after it is built and hence get to experience the joy, as well as the pain, of doing so
That said, let me come back to the title of this post. It is very important to note that the list of faculties above operates almost in that chronological order, too. We testers happen to find ourselves at the last step of the product development process. So while “when are you going to be done?” is an important question for all of these faculties to answer, it is even more so for testers, because we are the last stop on the journey that takes the product out of the company’s doors and into the hands of eagerly waiting customers. Needless to say, it is this moment, when the product reaches its intended destination, that makes the whole team’s efforts seem all the more worthwhile. Since we testers sit between the product and the customers, it is almost like saying that when the testers get done, the product gets done!
And there can’t be more truth to it! So how do we say we are done? Simple: if you can say “yes” to all of the questions below, you are probably done testing that feature!
- Is the feature code frozen? (Meaning no more change will be accepted)
- Is the test plan complete?
- Is the “planned” testing (automated, manual, end-to-end or whatever) done executing?
- Have we stopped finding serious bugs in the mainline customer scenarios as well as some complicated use cases, for quite a while now?
- Have all of our tests been consistently passing of late?
- Do we feel that the feature meets the customer expectation?
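Just for fun, the checklist above can be sketched as code. This is a purely illustrative sketch; the type, field names, and the 14-day “quiet period” are all made up for this example, not an actual tool or process we use:

```python
from dataclasses import dataclass

@dataclass
class FeatureStatus:
    """Hypothetical snapshot of a feature's test status (illustrative only)."""
    code_frozen: bool                # no more changes will be accepted
    test_plan_complete: bool         # the test plan is written and complete
    planned_testing_executed: bool   # automated/manual/end-to-end runs finished
    days_since_last_serious_bug: int # how long since a serious bug was found
    tests_consistently_passing: bool # all tests passing of late
    meets_customer_expectation: bool # a judgment call, not a measurement

def probably_done_testing(s: FeatureStatus, quiet_period_days: int = 14) -> bool:
    """'Probably' done: every exit criterion holds -- for now."""
    return (s.code_frozen
            and s.test_plan_complete
            and s.planned_testing_executed
            and s.days_since_last_serious_bug >= quiet_period_days
            and s.tests_consistently_passing
            and s.meets_customer_expectation)
```

Note the word “probably” in the function name: a single bad customer bug can flip the answer back to “no” at any time, which is exactly the point of the conversation below.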
If you noticed, I said “you are probably done testing that feature!”. So when will you really be done? OK, let me take you through a small, dramatized version of the conversation that happens between me and the team. The lines in parentheses are my mental conversation, just so you know!
My Manager : So, are you done testing this feature now? (Note that I said “yes” to all the questions above)
Myself : Yes, I think so (Whatever I said after “yes” drives my manager nuts sometimes)
My Manager : Are you sure?
Myself : (…well, I am, only until my customer finds a bad bug that I missed!) Yes, I did do everything that I had planned for testing this feature and it does look like I am done, as of now
At this point, my manager usually gets what I meant without me further having to explain.
Our mission is accomplished when we ship our product, and I do feel that we testers are an important and necessary “obstacle” on the way there. We fall short of being real customers only in that we don’t spend our hard-earned money to buy the product the way they do. Yet, believe me, when we see a customer run into a bad issue that stops their work or wastes it, we feel worse than if we had wasted our own money on a low-quality product, all the more because we had the chance to go through that pain ourselves so that our customers wouldn’t have to discover it. Like everyone in software making, we know that no software is absolutely bug free; all we try to make sure is that we know all, or most, of its bugs before we ship.
Over the last two products I have been involved in shipping, it has been quite a fun-filled roller coaster ride, watching some features get rave reviews and others become a real pain for customers. It is always a mixed bag of complaints and accolades when it comes to how our features are seen and used by our customers. In my next blog post, I plan to explore in more detail the reasons behind this we-will-never-get-done attitude of ours toward all the testing we have to do.