Recommended presentation: "The Future of Testing"

  I would like to share a talk by James Whittaker, a Software Architect at Microsoft. The talk, "The Future of Testing", was presented at the 2008 Annual Google Test Automation Conference: https://www.youtube.com/watch?v=Pug_5Tl2UxQ.

  Here are some of my thoughts after watching the video:
    * "Fixing bumpers" in cars vs. "it's a feature" in software. I found this a very interesting topic. There are many reasons we treat bugs as features. One of them is that we, as software developers, don't use the software the way our customers do, so sometimes we don't feel the pain of using it. As software testers, we need to feel our customers' pain and try our best to fix these issues. In the past, I always felt that SQL Server 2008's Setup was annoying, but I failed to raise the issue, or assumed it was not my task and someone else would fix it. As a result, our customers felt the pain of the setup problems. Another thing is that I did not participate in the several setup bug bashes. Even if I had participated, I guess I would not have opened any usability bugs, because I was afraid they would just be closed as "by design" or low priority. Our test automation also contributes to this issue (which raises another question: can or should UI testing be automated?). In summary, I do see this as a critical issue, and we will face it again if we cannot find a way to solve it.
    * Vista testing. The presenter said that so many testers, internal customers, and beta users tried Vista before we shipped it, yet we still faced issues after it was released to the public. I think one of the reasons is the same as with the bumpers vs. features point: people just want to ship the product, so sometimes they forget to make sure we ship the best product to the customer. Another thought is that when the product or the group is very big, it is hard to do the right thing at the right time, so big is not always a good thing. Again, we need to ask ourselves: did we help test Vista, did we open bugs against the product, why didn't we, and what would the result have been if we had? By the way, I also don't know how to open bugs against Windows 7.
    * Vista testing again. The graph showing the complexity of Vista's different modules is very interesting. It is a simple and straightforward way to show the software we are testing. Another point is that when we test a piece of software, fully understanding what we are testing against is necessary: what the inputs are, what the outputs are, how many possible conditions there are, and so on. From a SQL testing point of view, it seems that we mainly do white-box testing using T-SQL scripts. However, even when we use the same test script, we actually test different components, and sometimes we do use our internal knowledge to guide what we want to test. So isolating our testing to the component level, and finding bugs within that component, is important.
    * Xbox testing. I found this really interesting. It visualizes what we are testing, so that we can get a clear idea of what is missing from our testing. It looks like code coverage, but it offers more than code coverage: it gives real-time feedback, it finds the features end users actually exercise most, and it adjusts the code to help testing. In the case of SQL Server testing, TestQP follows a similar idea. TestQP can adjust its own behavior according to its internal domain knowledge and feedback from query results. For example, it can find the areas/rules that have not been touched during a test run and adjust the generated queries to cover the missing areas (see the first sketch after this list).
    * Engine componentization. I am starting to love these improvements. Think about what the picture of SQL Server would look like if we visualized it the same way as Vista: it would contain several major modules, but tons of links between them. Componentization will help us isolate these modules and remove the dependencies. It also helps our testing, because we can isolate our testing to one component at a time, and the need for a large number of end-to-end/integration tests will decrease (see the second sketch after this list).
    * Fire all heroes. I kind of share the same feeling as James. We don't need heroes; we just need to finish the project as we planned. I have experienced being a hero once, and I felt the pain of that experience. My takeaway is that we should try our best to prevent this from happening in future projects, even though we learn more from such projects than from projects that finish on time perfectly. Why did this happen, and why am I sure it will happen again? That is something we need to think about. I suspect the "big" factor contributes to it; the process itself contributes some, and we as testers and developers contribute some as well.
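
  To make the TestQP point a bit more concrete, here is a minimal sketch of the general feedback-driven idea, written only from what is described above. All names (GRAMMAR_RULES, FeedbackDrivenGenerator, the sample queries) are my own illustrative assumptions, not TestQP's actual design or API: the generator remembers which rules have been exercised and biases later queries toward the untouched ones.

```python
import random

# Hypothetical sketch only: these rule names and queries are illustrative,
# not taken from TestQP. The idea is to track which "areas/rules" a test run
# has exercised and steer generation toward the ones not touched yet.
GRAMMAR_RULES = {
    "inner_join": "SELECT * FROM t1 JOIN t2 ON t1.id = t2.id",
    "group_by":   "SELECT c1, COUNT(*) FROM t1 GROUP BY c1",
    "subquery":   "SELECT * FROM t1 WHERE id IN (SELECT id FROM t2)",
    "outer_join": "SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id",
}

class FeedbackDrivenGenerator:
    def __init__(self, rules):
        self.rules = rules
        self.hits = {name: 0 for name in rules}  # how often each rule was covered

    def next_query(self):
        # Prefer rules that have never been covered; otherwise pick the least used.
        untouched = [r for r, n in self.hits.items() if n == 0]
        rule = random.choice(untouched) if untouched else min(self.hits, key=self.hits.get)
        return rule, self.rules[rule]

    def record_result(self, rule, succeeded):
        # Feedback step: only count the rule as covered if the query actually ran.
        if succeeded:
            self.hits[rule] += 1

gen = FeedbackDrivenGenerator(GRAMMAR_RULES)
for _ in range(8):
    rule, sql = gen.next_query()
    ok = True  # in a real run we would execute `sql` and inspect the result
    gen.record_result(rule, ok)
    print(rule, "->", sql)

print("coverage:", gen.hits)
```

  In a real harness the feedback would of course come from executing the query and checking results, not from a hard-coded flag; the sketch only shows the "find the untouched area, then adjust generation" loop.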
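
  And for the componentization point, here is a small sketch of why isolating a component behind an interface reduces the need for end-to-end tests. "Optimizer" and "StorageEngine" below are made-up stand-ins, not real SQL Server components: once a component depends on an interface instead of a concrete module, it can be tested with a stub instead of a full integration setup.

```python
from typing import Protocol
import unittest

class StorageEngine(Protocol):
    # Made-up interface for illustration; not a real SQL Server API.
    def row_count(self, table: str) -> int: ...

class Optimizer:
    """Chooses a join strategy based only on what the storage interface reports."""
    def __init__(self, storage: StorageEngine):
        self.storage = storage

    def choose_join(self, left: str, right: str) -> str:
        smaller = min(self.storage.row_count(left), self.storage.row_count(right))
        return "nested_loops" if smaller < 1000 else "hash_join"

class FakeStorage:
    """Stub used in place of the real storage component."""
    def __init__(self, counts):
        self.counts = counts

    def row_count(self, table: str) -> int:
        return self.counts[table]

class OptimizerTest(unittest.TestCase):
    def test_small_table_uses_nested_loops(self):
        opt = Optimizer(FakeStorage({"orders": 50, "customers": 1_000_000}))
        self.assertEqual(opt.choose_join("orders", "customers"), "nested_loops")

    def test_two_large_tables_use_hash_join(self):
        opt = Optimizer(FakeStorage({"orders": 5_000_000, "customers": 1_000_000}))
        self.assertEqual(opt.choose_join("orders", "customers"), "hash_join")

if __name__ == "__main__":
    unittest.main()
```

  The component under test never needs the real storage module here, which is exactly the kind of isolation that componentization is supposed to buy us.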