Add Round Trip Testing to your toolset





Test automation authors have to balance a lot of factors to deliver good automation. You want to create high value tests and deliver them early in the process. Ideally you have a short feedback loop with developers. One of my favorite tricks for closing the feedback loop is round trip testing.


Round trip testing is simply letting the product handle its own complexity. You concentrate on writing simple tests that use two or more calls into the product.


A simple calculator example


Addition is a simple way to explain the idea. Here is some sample code to show how round trip tests work in practice. Imagine a calculator class with a static “DoOperation” method that takes two doubles and an enum indicating the operation to run. Here is a test class with three round trips. (In practice it is a good idea to create three separate tests, but I compact them here for brevity.)


Code


        [TestMethod()]
        public void DoOperationRoundTripTest()
        {
            double a = getRandomDouble();
            double b = getRandomDouble();
            double accuracy = 0.00000001D;

            // First leg: add the two random numbers.
            double answer = Calculator.DoOperation(a, b, MathOperation.add);
            Console.WriteLine("{0} {1} {2} = {3}", a, MathOperation.add, b, answer);

            // Round trip 1: subtract a from the answer and expect to get b back.
            double roundTrip1 = Calculator.DoOperation(answer, a, MathOperation.subtract);
            Console.WriteLine("{0} {1} {2} = {3}", answer, MathOperation.subtract, a, roundTrip1);

            Assert.AreEqual(b, roundTrip1, accuracy);

            // Round trip 2: subtract b from the answer and expect to get a back.
            double roundTrip2 = Calculator.DoOperation(answer, b, MathOperation.subtract);
            Console.WriteLine("{0} {1} {2} = {3}", answer, MathOperation.subtract, b, roundTrip2);

            Assert.AreEqual(a, roundTrip2, accuracy);

            // Round trip 3: subtract the other number as well (answer - b - a) and expect zero.
            double roundTrip3 = Calculator.DoOperation(roundTrip2, a, MathOperation.subtract);
            Console.WriteLine("{0} {1} {2} = {3}", roundTrip2, MathOperation.subtract, a, roundTrip3);

            Assert.AreEqual(0, roundTrip3, accuracy);
        }


Analysis


There are three round trips in the code. The first leg of each trip is adding two numbers. The first round trip subtracts one of those numbers from the answer; because subtraction undoes addition, you expect to get the other number back. The second round trip is the same as the first, but with the numbers in the other order. The last round trip is longer: both numbers are subtracted in sequence from the answer and you expect to end up at 0.


Notice that the data are not hard coded. We call the random function to generate random data. You get a little extra coverage for free by selecting random data; you might find some boundary bugs for conditions you didn’t realize existed. The second advantage of random data is that you don’t have to do anything complicated to create and store the data; you just make something up on the fly. Be sure to log the data and the answers you are getting, so that when a random case fails you can reproduce the problem from the logs. If you can’t generate some kind of random data, you may not really have a round trip.


The next thing to notice is that the verifications depend completely on the application to do the “heavy lifting”. This test code doesn’t need to know how to add or subtract numbers. It only needs to know that the round trip should work. Addition may seem trivial, but this technique works with lots of product code. If you look closely at most applications, you can send data on a round trip and verify it comes back correctly.


Lastly, due to overflows and rounding errors, the static getRandomDouble() method needs some thought. Otherwise you will overflow the calculator when you aren’t planning to. Computer rounding is also why the test compares against an accuracy threshold instead of checking for exact equality.
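Here is one way getRandomDouble might look. This is a minimal sketch; the one-million range limit and the shared Random instance are assumptions chosen to keep the sums far from overflow, not part of the product.

        // A sketch of getRandomDouble. The +/- one million range is an assumption
        // chosen to keep a + b well away from overflow; pick a range that fits your product.
        private static readonly Random rng = new Random();

        private static double getRandomDouble()
        {
            const double range = 1000000D;
            return (rng.NextDouble() * 2D * range) - range;
        }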


Other simple round trips


One very simple round trip is to just add a lot of data to an API and then count how many elements are in the store. One developer told me that he ran that test on a library he was dependent on and found that the library was dropping every 42nd data entry. Testers had run five to ten inserts at a time and didn’t find the bug, or if they did, they couldn’t reproduce it and it didn’t get fixed. A simple round trip test spotted the error and proved which module it belonged to.
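A count-based round trip like that takes only a few lines. DataStore, Add, and Count below are hypothetical stand-ins for whatever API you are actually testing.

        [TestMethod()]
        public void AddManyItemsRoundTripTest()
        {
            // DataStore, Add, and Count are hypothetical stand-ins for the API under test.
            var store = new DataStore();
            int itemsToAdd = 1000;

            for (int i = 0; i < itemsToAdd; i++)
            {
                store.Add(getRandomDouble());
            }

            // The round trip: everything we put in should be countable on the way out.
            Assert.AreEqual(itemsToAdd, store.Count);
        }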


For another example, imagine a program that has a 3D globe. You click the screen and the program converts the location on the globe to latitude and longitude. The program then looks up the country or ocean at that coordinate. Testing this program can be tricky because of the translation of mouse clicks to latitude and longitude, followed by another lookup from those coordinates to countries and oceans. There are two sets of round trips for this program.


The first one is the mouse click to lat-lon trip. This program had a function to put a marker at any coordinates (lat-lon) on the globe. For the tests, the marker was rendered as a small dot in a special color not used anywhere else on the globe. The mouse test went like this (a sketch follows the list):


1.    Create a random latitude and longitude.


2.    Put a marker on that spot.


3.    Dump the 3D scene to a BMP. Check the BMP for the special color. (If you don’t find it, spin the globe a random amount and try again till you do).


4.    Call the convertMouseClickToLatLon(x,y) function and verify it matches the random coordinates generated within tolerance.
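Here is a rough sketch of that mouse test. Only convertMouseClickToLatLon and the colored marker feature come from the product described above; every other helper name, signature, and tolerance here is an assumption.

        [TestMethod()]
        public void MouseClickToLatLonRoundTripTest()
        {
            // Only convertMouseClickToLatLon and the colored marker are described in the
            // steps above; the other helpers and the tolerance are assumptions.
            var rng = new Random();
            double lat = (rng.NextDouble() * 180D) - 90D;
            double lon = (rng.NextDouble() * 360D) - 180D;
            Console.WriteLine("Marker placed at {0}, {1}", lat, lon);

            Globe.PlaceMarker(lat, lon, MarkerColor.TestColor);

            // Dump the 3D scene and hunt for the special color; spin and retry if it is hidden.
            int x, y;
            while (!TryFindColor(Globe.DumpToBitmap(), MarkerColor.TestColor, out x, out y))
            {
                Globe.Spin(rng.Next(1, 360));
            }

            LatLon result = Globe.convertMouseClickToLatLon(x, y);
            Assert.AreEqual(lat, result.Latitude, 0.1D);
            Assert.AreEqual(lon, result.Longitude, 0.1D);
        }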


The second test is to make sure we can ID the countries and oceans correctly (a sketch follows the list).


1.    Pick a random country or ocean.


2.    Call the getRandomSpotInCountry method (the product had this built in for various reasons).


3.    Call getCountryOrOceanNameFromLatLon(lat, lon) function and make sure the country matches.
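A sketch of the second test might look like this. The getRandomSpotInCountry and getCountryOrOceanNameFromLatLon calls are the product functions described above; the pickRandomCountryOrOcean helper and the return types are assumptions.

        [TestMethod()]
        public void CountryAndOceanLookupRoundTripTest()
        {
            // pickRandomCountryOrOcean and the LatLon type are assumptions; the two
            // product calls are the ones described in the steps above.
            string expectedName = pickRandomCountryOrOcean();
            LatLon spot = Globe.getRandomSpotInCountry(expectedName);
            Console.WriteLine("{0}: {1}, {2}", expectedName, spot.Latitude, spot.Longitude);

            string actualName = Globe.getCountryOrOceanNameFromLatLon(spot.Latitude, spot.Longitude);
            Assert.AreEqual(expectedName, actualName);
        }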


In the actual program the mouse test passed 10,000 tests in just a few seconds. However, the second test was failing about 20% of the time. The developer was able to quickly find the bug when I gave him the logs with the coordinates that didn’t match.


After some hand spot checking we were confident this part of the program was working well. One of the big benefits was the simplicity. The test code didn’t have to know a thing about converting mouse clicks to polar coordinates and then converting those to latitude and longitude. It was a lot like the calculator example. The code just cooked up some very simple random data and put it through a round trip in the system.


Fuzzier round trips


The above examples show pretty cut and dried cases of round trip testing. What about cases where an exact match is pretty much impossible for the test code to figure out without re-implementing the product code?


Consider software that deals with postal addresses. For example, http://www.usps.gov has a lot of address functions. You can search for zip codes by address or by city, and search for cities by zip code. Getting random addresses from zip codes isn’t something the site offers (if you were the tester for this web site, you might ask for a test hook that does, to help with testing).


Still there are some tricks we can use to write simple, fast and diagnostic test automation for this API.


Reversing the round trip


Reversing the round trip means taking data out of the application and transforming it in test code. Then you give the data back to the application and expect it to handle it in some predictable way. The key to a reverse round trip is that you typically make only one call to the product, not two or more.


You can put an address into the web site and it will canonicalize the address into a standard form with the zip+four.


One approach that works here is to take an address that is already canonical and degrade it. For example we have a canonical address of


WHITE HOUSE


1600 PENNSYLVANIA AVE NW


WASHINGTON DC 


20500-0003


 


There are several transformations we can do to it before putting it back into the system. For example, remove the street descriptor words (AVE, NW) and feed it back in. Make sure you get the canonical address back, or an appropriate error. If your address engine supports spell checking (USPS doesn’t), you can randomly mutate the words and see if the system can guess what you mean most of the time. Because we are doing a fuzzy match, we might not care if the spelling engine isn’t 100% robust for randomly mutated characters. We could create an advanced heuristic for mutations, but that sounds like complex and bug-prone software.
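Here is a minimal sketch of that reverse round trip, assuming a hypothetical CanonicalizeAddress wrapper around the address service; the degradation step just strips the street descriptor words from the example address above.

        [TestMethod()]
        public void DegradedAddressRoundTripTest()
        {
            // CanonicalizeAddress is a hypothetical wrapper around the address service;
            // the canonical address is the example from the text above.
            string canonical = "WHITE HOUSE 1600 PENNSYLVANIA AVE NW WASHINGTON DC 20500-0003";

            // Degrade the address by stripping the street descriptor words.
            string degraded = canonical.Replace(" AVE", "").Replace(" NW", "");
            Console.WriteLine("Degraded input: {0}", degraded);

            // Reverse round trip: the system should restore the exact canonical form.
            Assert.AreEqual(canonical, AddressService.CanonicalizeAddress(degraded));
        }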


In this example we have reversed the round trip, since we started with known data from the system and then changed it in some way. It’s not as simple as the calculator example; you have to use your imagination to find ways to transform the data and still get useful, automatically verifiable results back from the system. In this case we expect the system to fix up the addresses and give them back exactly like what we started with.


Looking for other trips


Besides making addresses canonical, the site also offers the ability to get a list of zip codes in a city and a list of cities in a zip code. There are a finite number of US zip codes, and we can discover the entire list with a little research. Once you have the list, your test function can loop over the zip codes and get the city or cities as a result, then feed those cities back into the reverse function and verify the results match our input.
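A sketch of that loop, assuming hypothetical GetCitiesForZip and GetZipsForCity wrappers around the site’s lookup functions and a researched allZipCodes list:

        [TestMethod()]
        public void ZipCodeCityRoundTripTest()
        {
            // GetCitiesForZip, GetZipsForCity, and allZipCodes are assumptions, not part
            // of the USPS site; they stand in for the lookups described above.
            foreach (string zip in allZipCodes)
            {
                foreach (string city in AddressService.GetCitiesForZip(zip))
                {
                    // Round trip: the zip we started with should appear in that city's zip list.
                    CollectionAssert.Contains(AddressService.GetZipsForCity(city), zip);
                }
            }
        }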


Other trips are less direct. There are 25 zip codes listed for Seattle, Washington today. You could record that number and check that it stays constant across builds of the software. You could also put the system under load and see if the number of results is always 25. Truncated data is certainly a possibility in a web service that’s strained. In fact, these types of tests are really good ones to run while doing load testing. They are simple and fast to run, so you can do a lot of them and use them to drive some of the load.


C.R.U.D.dy trips


Many data APIs follow the CRUD (Create, Retrieve, Update, and Delete) model. These APIs follow some typical patterns that make round trip testing simple and effective.


·         Create: create a new data item (create functions usually return an ID), query the ID, and make sure your randomly generated data matches.


·         Retrieve: Think about the various things you can retrieve. For example, many APIs allow you to search for records and count the results. One way to test this is to create a GUID, input some random number of records that include the GUID, and then query for your GUID. Verify that the count matches.


·         Update: Pick a random record, put some random data into the field(s), and then retrieve the record by every possible key to make sure the changes are sticky.


·         Delete: Create a random number of records with some unique tag like a GUID. Delete one. Count how many are left and verify it’s N-1 (sketched below).
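Here is a minimal sketch of the delete round trip. RecordStore and its CreateRecord, DeleteRecord, and CountRecordsWithTag members are hypothetical stand-ins for the CRUD API under test.

        [TestMethod()]
        public void DeleteRoundTripTest()
        {
            // RecordStore, CreateRecord, DeleteRecord, and CountRecordsWithTag are
            // hypothetical stand-ins for the CRUD API under test.
            string tag = Guid.NewGuid().ToString();
            int recordCount = new Random().Next(2, 100);

            var ids = new List<int>();
            for (int i = 0; i < recordCount; i++)
            {
                ids.Add(RecordStore.CreateRecord(tag, getRandomDouble()));
            }

            RecordStore.DeleteRecord(ids[0]);

            // Round trip: the store, not the test, keeps track of the data; we only count.
            Assert.AreEqual(recordCount - 1, RecordStore.CountRecordsWithTag(tag));
        }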


Notice that the sample tests above don’t care about the data in any specific way. You are just looking for round trips the product will allow you to exploit and verify. These kinds of tests find a lot of bugs the first time they are run, and they are valuable to run anytime you make changes to the product. You can also write them quickly and trust them when they report failures.


With a little creativity you can create an automated suite of tests with tens of thousands of individual tests that run in a very short amount of time and utilize round trips to keep the tests very simple.


Dealing with complex product logic


Some round trips are so complex you can’t test the results simply. For example, Excel has the ability to open CSV files and then save them as XML files. You could write a complex program that understands Excel’s XML format and can do 1:1 comparisons of the data. If you are testing this feature in Excel, you probably should write that complex automation at some point. However, your developer will be more productive if you can deliver some fast, simple, and effective automation beforehand.


Simple comparisons don’t always work


Here is the problem: you can’t just do the full round trip and simply diff the results, because there are a lot of valid ways to write a CSV file. A couple of solutions come to mind. The simplest one you can deliver today is to just count the rows and columns. Sure, you might miss subtle bugs, but your developer will be able to run that test quickly every time they make code changes and know they didn’t create an off-by-one error in the transformations.
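A sketch of that row-and-column check might look like the helper below; the naive Split call assumes the CSV has no quoted commas or embedded newlines.

        // A sketch of the row-and-column count check (requires System.IO). The naive
        // Split call assumes no quoted commas or embedded newlines in the CSV.
        private static void AssertSameShape(string originalCsvPath, string roundTrippedCsvPath)
        {
            string[] originalRows = File.ReadAllLines(originalCsvPath);
            string[] roundTripRows = File.ReadAllLines(roundTrippedCsvPath);

            Assert.AreEqual(originalRows.Length, roundTripRows.Length);
            for (int row = 0; row < originalRows.Length; row++)
            {
                Assert.AreEqual(originalRows[row].Split(',').Length,
                                roundTripRows[row].Split(',').Length);
            }
        }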


You could parse the CSV files item by item and compare them using the same rules Excel uses. That’s a reasonable approach, but you can do even better.


Don’t give up on making the product “do your work”


Another approach is to start with a CSV file (you might generate it randomly) and have Excel save it as XML. Then load that XML back into Excel and save it as a new CSV file.


Now load both CSV files back into fresh instances of Excel and use the document object model to compare all the values in each cell. Your test code doesn’t need to know a thing about CSV files (except the part that makes random CSV files for the tests) and you can easily compare the round trips. This code is very easy to tweak to check the round trips to every format Excel supports. Best of all, you have delivered a lot of effective BVT automation without your code having to understand any details of the file formats. Now your developer has an effective tool for smoke testing file conversions, and you have time to delve into the arcana of each format one at a time.
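Here is a minimal sketch of that comparison, assuming a reference to the Microsoft.Office.Interop.Excel primary interop assembly (aliased as Excel); Excel does all the parsing and the test only walks the object model.

        // Assumes: using Excel = Microsoft.Office.Interop.Excel; at the top of the file.
        private static void AssertCsvFilesMatchInExcel(string csvPathA, string csvPathB)
        {
            var app = new Excel.Application();
            try
            {
                Excel.Workbook bookA = app.Workbooks.Open(csvPathA);
                Excel.Workbook bookB = app.Workbooks.Open(csvPathB);
                Excel.Range rangeA = ((Excel.Worksheet)bookA.Worksheets[1]).UsedRange;
                Excel.Range rangeB = ((Excel.Worksheet)bookB.Worksheets[1]).UsedRange;

                // Excel already parsed both files; the test just compares cell values.
                Assert.AreEqual(rangeA.Rows.Count, rangeB.Rows.Count);
                Assert.AreEqual(rangeA.Columns.Count, rangeB.Columns.Count);

                for (int row = 1; row <= rangeA.Rows.Count; row++)
                {
                    for (int col = 1; col <= rangeA.Columns.Count; col++)
                    {
                        Assert.AreEqual(((Excel.Range)rangeA.Cells[row, col]).Value2,
                                        ((Excel.Range)rangeB.Cells[row, col]).Value2);
                    }
                }
            }
            finally
            {
                app.Quit();
            }
        }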


When you can’t get there from here


You will run into situations where you just can’t create reasonable round trips. You will find that you put data in at point A and can’t see it until it’s been seriously mangled and comes out way downstream at point B. My name for this type of software is “the soup factory”. The soup factory takes vegetables in one side and a pureed blend of steaming hot soup comes out the other. It’s great at lunch time, but you can’t reverse the transformation. A lot of information is lost in the blender.


The soup factory problem is that the end result isn’t reversible into the inputs. You put 10 carrots into the factory. How can you tell from the bowl of soup that the carrot content is within spec? You can (with a spectral analyzer and a PhD), but doing so is just not cost effective. The best you can quickly do is taste the soup and see if it’s any good. Sadly, your feedback will be vague and difficult to act on. Worse, it won’t be very diagnostic.


A good approach here is to “get inside” and try to do round trips on the components. That carrot slicer should take 10 pounds of carrots and turn them into carrot slices and stems. Stems plus sliced carrots should be 10 pounds unless you have a carrot leak. If you can’t find a way to peer inside the factory you need to have a conversation with your developer. They might not want to change the production code to expose the carrot slicing machine, but maybe they can add some debug code for you.


Round trip testing is a valuable tool to put in your automation toolbox.


There are a lot of effective ways to create high value test automation. Concentrating on delivering the fastest, most diagnostic and robust automation first will help your team deliver high quality products on schedule. Using round trip testing will let the product handle its own complexity and allow you to deliver powerful test automation quickly.


Good luck with round trip testing. I hope it makes your test code faster, simpler and you find better bugs.

