Data-driven Visual C++ Editing Test Framework

Hello, I am Smile, a member of the QA team on the Visual C++ Compiler team. I would like to write about the methods we use to verify IntelliSense results while editing in the Visual C++ IDE. To better understand this, consider the following common user scenario:

Andy opens his project in the IDE to continue the work he left off the day before. He opens Client.cpp, finds the line where he stopped yesterday, and starts coding. After adding a few lines of code, he wants to use a member of a struct, so he invokes the MemberList IntelliSense operation (Ctrl + J on the keyboard) on an instance of that struct to check the full name of that member ….

A simple scenario, right? Yes, but our job is to test whether the MemberList operation reports accurate and complete results, not only in simple scenarios like this one, but also in very complex scenarios that may involve thousands of lines of code and a variety of IDE operations, such as FindAndReplace, FindAllReference, Delete/Undelete, SwitchBetweenFiles, and so on.

To improve test-writing productivity and reduce maintenance cost, you might think it a good idea to write C++ code for each test. Unfortunately, this solution is not a good one, because each line of the test code itself might be buggy and would therefore need to be verified. Also, some tests may be used for 20 years and modified from time to time. It can be painful for a QA engineer to read and understand test code written 20 years ago, given that everyone has his own coding style and there may not be enough comments explaining each modification made to the test over those 20 years.

So how do we do it?

The solution we use is called the “Data-driven Visual C++ Editing Test Framework”. The idea is to abstract common IDE edit operations and operation sequences into APIs, and wrap those APIs in script statements. The script statements corresponding to one or more specific user scenarios are stored in a .xml file as XML data. To write a test, a QA engineer doesn’t need to write a single line of C++ code. All he needs to do is figure out the user’s action sequence and map it to the pre-defined script statements. That’s all. At runtime, the test framework reads those script statements from the .xml file, interprets them into the corresponding APIs, and executes them. Compared with writing C++ code for each test, this data-driven approach has the following advantages:

1.       relatively bug-free – all the error-case and exception handling is already done and verified in the implementation of those APIs.

2.       easy to maintain – adding, changing, or removing user editing operations only requires adding, editing, or removing a few lines of script statements. No C++ test code needs to be written, and therefore no verification code needs to be written to justify the test code.

3.       easier to understand – high-level, abstract script statements hide implementation details and are therefore much easier to understand than low-level C++ code, especially after 20 years.
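To make the “interpret and execute” step concrete, here is a minimal sketch of how a framework like this might dispatch parsed script statements to the underlying APIs. This is purely illustrative: the dispatch table, the handler signatures, and the console output are my assumptions, not the framework’s actual implementation.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical illustration: each script statement name maps to a handler
// that receives the statement's arguments. The names mirror the examples
// in this post; the real framework's APIs are not shown here.
using Handler = std::function<void(const std::vector<std::string>&)>;

std::map<std::string, Handler> makeDispatchTable() {
    return {
        {"OPEN_SOLUTION", [](const std::vector<std::string>& args) {
            std::cout << "Opening solution " << args.at(0) << "\n";
        }},
        {"OPEN_FILE", [](const std::vector<std::string>& args) {
            std::cout << "Opening file " << args.at(0) << "\n";
        }},
    };
}

// Interpret one parsed statement: look up its handler and run it.
// Returns false for an unknown statement name.
bool interpret(const std::string& name,
               const std::vector<std::string>& args,
               const std::map<std::string, Handler>& table) {
    auto it = table.find(name);
    if (it == table.end()) return false;
    it->second(args);
    return true;
}
```

At runtime the framework would read each statement from the .xml file, split it into a name and arguments, and pass them to something like `interpret`.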

OK, I guess the next question is: what kinds of script statements do we support?

I list some examples below:

·         OPEN_SOLUTION — Open a solution.

·         OPEN_FILE — Open a file.

·         FIND_TEXT — Search for particular text in the currently opened file.

·         INS_AT_OFFSET — Insert text at the beginning of the line a given number of lines below the first occurrence of an existing text.

·         MOVE_CURSOR_TO_OFFSET — Move the cursor to a location specified relative to an existing text or to the current cursor position.

·         ADD_FI_INCLUDED — Change the project configuration to add the /FI option to a VC project.

·         MEMBER_LIST — Verify the MemberList IntelliSense results at the current cursor location.

With those script statements, a test for the simple scenario I introduced above can be as simple as the following:

·         OPEN_SOLUTION@@TEST_ROOT@@TestCode\Client\Client.sln   //Open solution

·         OPEN_FILE@@TEST_ROOT@@TestCode\Client\Client.cpp   //Open Client.cpp

·         INS_AT_OFFSET@@//To be done@@1@@ int b = mySon.    //Add “int b = mySon.” 1 line below “//To be done”

·         MEMBER_LIST@@CONTAIN@@int Parent::g(int j)    //Verify whether memberlist contains “int Parent::g(int j)”

       (@@ is the separator)
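As an illustration of that separator convention, a statement line could be split into its name and arguments as sketched below. This is a hypothetical helper; the post does not show the framework’s actual parser.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: split a statement line such as
//   MEMBER_LIST@@CONTAIN@@int Parent::g(int j)
// on the "@@" separator. The first field is the statement name;
// the remaining fields are its arguments.
std::vector<std::string> splitStatement(const std::string& line) {
    const std::string sep = "@@";
    std::vector<std::string> fields;
    std::size_t start = 0, pos;
    while ((pos = line.find(sep, start)) != std::string::npos) {
        fields.push_back(line.substr(start, pos - start));
        start = pos + sep.size();
    }
    fields.push_back(line.substr(start));  // last field (no trailing separator)
    return fields;
}
```

For example, `splitStatement("MEMBER_LIST@@CONTAIN@@int Parent::g(int j)")` yields the three fields `MEMBER_LIST`, `CONTAIN`, and `int Parent::g(int j)`.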

This framework can still be improved in multiple ways, such as enriching the script statement library, introducing richer language constructs (e.g. if-else and while-do statements), and enhancing the debugging mechanism. Among all these ideas, one improvement might be the most beneficial to our users as well as our product, which I call the “motion-capture-based problem repro” feature. With this feature, whenever a user hits a problem in the IDE, he no longer needs to write a long email describing how to reproduce it. All he needs to do is enable the feature and repeat his previous operations in the IDE (for example: move the cursor to a specific location -> edit some code -> trigger MemberList … -> problem appears). Each of those operations is logged and used to generate a script as described above. Reporting the problem on the user’s side then becomes fairly easy: enable the feature -> repeat the operations -> send out the generated script and the related source code. Reproducing the problem on our side also becomes fairly easy: simply run the script and watch the problem appear. Moreover, those repro scripts can be categorized and selected as regression tests, which further helps us improve our productivity and product quality.
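Such a recorder might be sketched as follows. The class name, the hook point into the IDE, and the statement formatting are all my assumptions for illustration; only the `@@` convention comes from the script format described above.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of "motion-capture" recording: each IDE operation
// is logged as a ready-to-replay script statement in the @@-separated
// format described in this post.
class OperationRecorder {
public:
    // Called (hypothetically) by the IDE each time the user performs
    // an operation, e.g. record("OPEN_FILE", {"Client.cpp"}).
    void record(const std::string& name,
                const std::vector<std::string>& args) {
        std::ostringstream line;
        line << name;
        for (const auto& a : args) line << "@@" << a;
        script_.push_back(line.str());
    }

    // The accumulated script, ready to send along with the source code.
    const std::vector<std::string>& script() const { return script_; }

private:
    std::vector<std::string> script_;
};
```

Recording `record("OPEN_FILE", {"Client.cpp"})` would append the replayable line `OPEN_FILE@@Client.cpp` to the script.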

Any thoughts or suggestions about our test framework, script language, or future work? I look forward to hearing from you. :)


Smile Wei

Visual C++ Compiler Team