Oracles are hard...

In a previous post, I mentioned that when writing automated tests, the grey area between pass and fail can be confusing. A point I didn't mention is that often, just determining pass or fail can be hard.

Take the Win32 API CreateFile, for example. The CreateFile function in the Windows API creates a new file or opens an existing one. If it succeeds, the function returns a handle (an opaque value used to refer to the file); if it fails, it returns INVALID_HANDLE_VALUE, and the specific error code is available through GetLastError. You could test this function in a trivial manner by checking the return value to determine the test status.

TEST_RESULT TestCreateFile(void)
{
    HANDLE hFile = CreateFile(...);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        return TEST_FAIL;
    }
    else
    {
        return TEST_PASS;
    }
}

This “test” really only determines whether the CreateFile function returns a value. A significant amount of additional checking is necessary to find out whether the function actually worked and to report an accurate test result. A tester may create an oracle (a verification function) to aid in determining the test status.

TEST_RESULT TestCreateFile(void)
{
    TEST_RESULT tr = TEST_FAIL;
    HANDLE hFile = CreateFile(...);
    if (IsValidFile(hFile, ...) == TRUE)
    {
        tr = TEST_PASS;
    }
    return tr;
}

BOOL IsValidFile(HANDLE hFile, ...)
{
    /* ORACLE:
       - check the handle value for INVALID_HANDLE_VALUE,
       - determine whether the file exists on disk,
       - confirm that the attributes assigned to the file are correct,
       - if the file is writable, confirm that it can be written to,
       - do any other applicable verification.
       Return TRUE if the file appears valid; otherwise return FALSE.
    */
}

The difficulty with oracles is accurately predicting the result of the operations they verify. An accurate oracle requires extensive knowledge of the functionality under test and clear documentation of that functionality's intent. At a minimum, an oracle must verify that the operation succeeded, but it must also verify the environment and program state changes that occur in parallel with, or as side effects of, the functionality under test.