I am generally not a big fan of static data in test automation, but being a pragmatic person, I know there are clearly times when using data-driven testing is just plain common sense. For example, data-driven testing is an effective automation approach when designing ‘black-box’ tests for an API.
Data-driven testing is a common approach to test automation in which static test data is passed to application parameters and the expected result (which is usually also read from static data) is compared against the actual result. This approach is effective when the comparison of the actual result against the expected result can be resolved as a Boolean outcome. In other words, if the actual and expected results match, the outcome is true and the test passes; otherwise, the outcome is false and the test fails. (Of course, if something occurs during the test such that there is no actual result, then that particular test is usually logged as indeterminate.)
Of course, the key to effective data-driven testing is the data! If we don’t identify the most appropriate data to use in the test, then the test case may have holes and we might overlook important information or miss critical anomalies. If we have too much redundant data, then we may simply be running unnecessary tests (yes, even with test automation, redundant testing is not an efficient use of resources).
Let’s say we had to test an API method such as:
public bool IsValidNetBiosName(string name)
where the return value is true if the string argument passed to the name parameter is a valid NetBIOS name on the Windows operating environment; otherwise it returns false.
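For context, here is a minimal sketch of what such a validator might check. This is purely illustrative and not the actual product implementation: the 15-character limit and the illegal-character set below are assumptions for this example, and the real Windows naming rules are more involved.

```csharp
// Hypothetical sketch of the method under test (illustration only).
// The 15-character limit and the illegal-character list are assumed
// here; the real Windows validation rules are more involved.
public bool IsValidNetBiosName(string name)
{
    if (string.IsNullOrEmpty(name) || name.Length > 15)
        return false;

    // Characters a NetBIOS name may not contain (assumed set)
    const string illegalChars = "\\/:*?\"<>|";
    foreach (char c in name)
    {
        if (illegalChars.IndexOf(c) >= 0)
            return false;
    }
    return true;
}
```

From a black-box testing perspective, of course, we don’t need to see this implementation at all; we only need the contract described above (a string in, a Boolean out) to design the data-driven test.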
With a data-driven testing approach, we could use a simple CSV file that contains the string arguments and the expected result for each string passed to the name parameter. A partial sample of the CSV data file would be:
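An illustrative sample of such a file is shown below. The specific rows and expected results in the second column are assumptions for this example, based on typical NetBIOS naming rules:

```
SERVER-01,True
ThisNameIsTooLongFor15,False
BAD*NAME,False
,False
null,False
```

Note that the test data in the fourth row is an empty string, while the string “null” in the last row is converted by the test code into a real null argument.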
(NOTE: null is a special case in which we need to convert the string “null” in the data file to an actual null in the test code, and the test above null is an empty string. An empty string and null are two different things, and both must be tested in this case.)
Next, we need to read the CSV file into our automated test, and perhaps the easiest way I have found to read a text or CSV file in C# is with the File.ReadAllLines method. The ReadAllLines method opens a text file, reads each line of text as an element in a string array, and then closes the file. Once we have an array of all the lines in our data file, we simply need to parse each element in the string array into the test data and the expected result, and then compare the actual result against the expected result, as illustrated in this example.
// Read each line in the entire CSV file into a string array
string[] testDataArray = System.IO.File.ReadAllLines("myTestData.csv");
// Iterate through each line in the test data file
foreach (string test in testDataArray)
{
    // Split each line into an array where the elements are the
    // test data and the expected result
    string[] testElement = test.Split(',');
    string testData = testElement[0];
    string expectedResult = testElement[1];
    string result;
    // Special case for passing a null to the API parameter
    if (string.Equals(testData, "null", StringComparison.OrdinalIgnoreCase))
    {
        result = string.Equals(api.IsValidNetBiosName(null).ToString(),
            expectedResult, StringComparison.OrdinalIgnoreCase) ? "Pass" : "Fail";
    }
    else
    {
        // Compare the return value against the expected result
        result = string.Equals(api.IsValidNetBiosName(testData).ToString(),
            expectedResult, StringComparison.OrdinalIgnoreCase) ? "Pass" : "Fail";
    }
}
This is a rather simple example, but data-driven testing is effective for unit testing and API testing, and it can even be used in automated GUI testing (although data-driven automation may have only limited applicability there). I am a firm believer in the KISS principle when it comes to developing automated tests, and the ReadAllLines method is perhaps the easiest and most efficient way to read in a data file for data-driven testing. Of course, data-driven testing doesn’t solve all problems. Chan Chaiyochlarb has a good post on some pitfalls to watch out for. But, in the right context, data-driven testing can be one effective approach in automated testing.