Unit Testing Apache Cordova Apps with Visual Studio, Part 2

Part 1 of this post provided some background on unit testing with a basic example in Visual Studio for an app written with Apache Cordova. In this post we’ll make many improvements to that example using the process of test-driven development, which helps you separate the process of thinking about how to challenge a unit of code to fail from the process of coding that unit to make it not fail! We’ll also discuss a little about debugging unit tests.

Again, these two posts are a shorter version of walkthroughs that we’ve recently added to the documentation for the Visual Studio Tools for Apache Cordova, in the section Author & run tests.

 

Test-driven development

It should be obvious that having one test for an empty (or overly simplistic) implementation of normalizeData is woefully inadequate. A robust normalizeData needs to handle all sorts of bad JSON that would cause utility functions like JSON.parse and/or property dereferences to fail. And of course we need to have tests that exercise all the data variations we can think of.

Question is, where to start? Write code, write tests, or go back and forth between the two? You’ve probably done plenty of the latter: write some better code into normalizeData, then write some tests, examine cases that fail, then write more code, write more tests for cases you haven’t covered yet, correct the code again, and so on. Although this is manageable, it forces you to switch back and forth between thinking about code and thinking about data, and switching between trying to make the code work properly and thinking up ways to make it fail. These really are two different mental processes!

This is where the approach of test-driven development shows its value. To fully exercise any piece of unit code, you eventually have to think through all the necessary input variations. Test-driven development does this thinking up front, which is very efficient because once you think of a few test cases, you can quickly see additional ones. For example, if you think to include an object inside a piece of JSON, you’ll naturally think to include an array as well. If you think to include an integer where the unit code might expect a string, it’s natural to also try a string where it expects an integer.

With the normalizeData function, I found it took perhaps 15 minutes to think through a strong set of test inputs with different challenges to JSON.parse, challenges to assumptions about the structure of valid JSON, and challenges to assumptions about data types. Here’s that list:

'{"Name": "Maria", "PersonalIdentifier": 2111858}'
null
''
'blahblahblah'
'{{}'
'{{[]}}}'
'document.location="malware.site.com"'
'drop database users'
'{"Name": "Maria"}'
'{"PersonalIdentifier": 2111858}'
'{}'
'{"name": "Maria", "personalIdentifier": 2111858}'
'{"nm": "Maria", "pid": 2111858}'
'{"Name": "Maria", "PersonalIdentifier": 2111858, "Other1": 123, "Other2": "foobar"}'
'{"Name": "Maria", "PersonalIdentifier": "2111858"}'
'{"Name": "Maria", "PersonalIdentifier": -1}'
'{"Name": "Maria", "PersonalIdentifier": 123456789123456789123456789123456789}'
'{"Name": , "PersonalIdentifier": 2111858}'
'{"Name": 12345, "PersonalIdentifier": 2111858}'
'{"Name": {"First": "Maria"}, "PersonalIdentifier": 2111858}'
'{"Name": "Maria", "PersonalIdentifier": {"id": 2111858}}'
'{"Name": {"First": "Maria"}, "PersonalIdentifier": {"id": 2111858}}'
'{"Name": ["Maria"], "PersonalIdentifier": 2111858}'
'{"Name": "Maria", "PersonalIdentifier": [2111858]}'
'{"Name": ["Maria"], "PersonalIdentifier": [2111858]}'
'{"Name": "Maria", "PersonalIdentifier": "002111858"}'
'{"Name": "Maria", "PersonalIdentifier": 002111858}'

Again, once you get thinking about input variations, one test case naturally leads to another. And once you’ve worked out JSON variations for one unit test, you’ve essentially created a reusable asset for any other function that takes JSON, in whatever project.

With the input list in hand, it’s now a simple matter—with a lot of copy and paste!—to wrap the necessary test structure around each set of inputs. For example:

it('accepts golden path data', function () {
    var json = '{"Name": "Maria", "PersonalIdentifier": 2111858}';
    var norm = normalizeData(json);
    expect(norm.name).toEqual("Maria");
    expect(norm.id).toEqual(2111858);
});

it('rejects non-JSON string', function () {
    var json = 'blahblahblah';
    var norm = normalizeData(json);
    expect(norm).toEqual(null);
});

it('accepts PersonalIdentifier only, name defaults', function () {
    var json = '{"PersonalIdentifier": 2111858}';
    var norm = normalizeData(json);
    expect(norm.name).toEqual("default"); //Default
    expect(norm.id).toEqual(2111858);
});

it('ignores extra fields', function () {
    var json = '{"Name": "Maria", "PersonalIdentifier": 2111858, "Other1": 123, "Other2": "foobar"}';
    var norm = normalizeData(json);
    expect(norm.name).toEqual("Maria");
    expect(norm.id).toEqual(2111858);
});

it('truncates excessively long Name', function () {
    //Create a string longer than 255 characters
    var name = "";
    for (var i = 0; i < 30; i++) {
        name += "aaaaaaaaaa" + i;
    }

    var json = '{"Name": "' + name + '", "PersonalIdentifier": 2111858}';
    var norm = normalizeData(json);
    expect(norm.Name).toEqual(name.substring(0, 255));
    expect(norm.Name.length).toEqual(255);
    expect(norm.id).toEqual(2111858);
});

it('rejects object Name and PersonalIdentifier', function () {
    var json = '{"Name": {"First": "Maria"}, "PersonalIdentifier": {"id": 2111858}}';
    var norm = normalizeData(json);
    expect(norm).toEqual(null);
});

Note that the test names (the first argument to it) are what appear in UI like Test Explorer, so they should always identify what’s being tested and the basic nature of the test (e.g. “rejects” or “accepts”). Also, it’s certainly possible to put inputs and expected results into an array and iterate that collection instead of writing each test individually. I’ve just left everything explicit here.
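If you do want to iterate a collection, a sketch of that table-driven style might look like the following. To keep the snippet self-contained, it includes tiny stand-ins for Jasmine’s it and expect plus a placeholder normalizeData; in a real spec file, Jasmine supplies the former and your app code supplies the function under test.

```javascript
// Minimal stand-ins so this sketch runs on its own; in a real spec file,
// Jasmine provides it() and expect(), and normalizeData is the app code
// under test.
var results = [];
function it(name, fn) {
    try { fn(); results.push(name + ': pass'); }
    catch (e) { results.push(name + ': FAIL'); }
}
function expect(actual) {
    return {
        toEqual: function (expected) {
            if (JSON.stringify(actual) !== JSON.stringify(expected)) {
                throw new Error('expected ' + JSON.stringify(expected));
            }
        }
    };
}
function normalizeData(json) { // placeholder sketch, not the real implementation
    try {
        var data = JSON.parse(json);
        if (data === null || typeof data !== 'object' ||
            typeof data.Name === 'object' ||
            typeof data.PersonalIdentifier === 'object') {
            return null;
        }
        return {
            name: (typeof data.Name === 'string') ? data.Name : "default",
            id: data.PersonalIdentifier
        };
    } catch (e) {
        return null;
    }
}

// The table: each entry pairs an input from the list above with the
// expected normalized result (null meaning "rejected").
var cases = [
    { name: 'accepts golden path data',
      json: '{"Name": "Maria", "PersonalIdentifier": 2111858}',
      expected: { name: "Maria", id: 2111858 } },
    { name: 'rejects non-JSON string', json: 'blahblahblah', expected: null },
    { name: 'rejects empty string', json: '', expected: null },
    { name: 'rejects object Name and PersonalIdentifier',
      json: '{"Name": {"First": "Maria"}, "PersonalIdentifier": {"id": 2111858}}',
      expected: null }
];

// One loop registers a test per table entry.
cases.forEach(function (testCase) {
    it(testCase.name, function () {
        expect(normalizeData(testCase.json)).toEqual(testCase.expected);
    });
});

console.log(results.join('\n'));
```

The tradeoff is readability in Test Explorer: each entry still gets its own named test, but a reader has to mentally run the loop to see what any one test does.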

The point here is that by first spending something like 30 minutes of focused work on input variations and wrapping them in unit tests, you’re now entirely free to focus on writing the code without having to wonder (or worry!) whether you’re really handling all the possible inputs.

In fact, if you run all these tests against an initially empty implementation of a function like normalizeData, many or most of them will clearly fail. But that just means that the list of failed tests is your to-do list for coding, mapping exactly to the subset of input cases that the function doesn’t yet handle properly. You improve the code, then, to pass more and more tests. When the function passes all the tests, you can be completely confident that it can handle all the possibilities it should. For a full walkthrough, which includes the completed implementation of normalizeData, see Improving the unit tests: an introduction to test-driven development in the documentation.

Test-driven development, in short, cleanly separates the task of coming up with input cases from the task of writing the code to handle those cases. Ultimately, if you’re going to do a really good job testing your code, you’ll have to separate these tasks one way or another. By acknowledging this up front, test-driven development can result in more robust code earlier in the development process, which can reduce your overall costs.

 

Debugging tests and runtime variances

One of the unit tests shown in the previous section is buggy. Do you see which one? I thought I’d written a complete implementation of normalizeData, which included code to handle the 'truncates excessively long Name' test case, but that case was still failing and I couldn’t see the problem right away.

Fortunately, Visual Studio gives you the ability to debug unit tests just like you can debug any other kind of code, with the ability to set breakpoints, examine variables, and step through the test code. Unit test code is still code, after all, and just as prone to bugs!

Setting breakpoints in the Visual Studio editor isn’t enough, however. Running tests through Test Explorer doesn’t pick up those breakpoints, because a test runner like Chutzpah spawns separate PhantomJS processes to load and execute all the JavaScript, and the Visual Studio IDE doesn’t have debug hooks into that engine.

You must instead use the Test > Debug > Run menu, where you’ll find options for Selected Tests and All Tests. You can also right-click a failed test and select Debug Selected Tests. Any of these commands will, for JavaScript, instruct the test runner to execute code in a browser, to which Visual Studio can attach the debugger. Reports from the test framework will also appear in the browser.

Gotcha alert! The Test > Debug > Run > All Tests and Test Explorer > Run All commands automatically save any files that you’ve modified in the project. However, commands that run individual tests do not save changed files, which can lead to some confusion when an expected change isn’t picked up. So be sure to save prior to running individual tests.

In the debugger, I was able to see that my unit test mistakenly tries to dereference norm.Name rather than norm.name.

After this fix I had just a single remaining test to satisfy, one that tested having a leading zero on an integer value in the JSON. I wasn’t sure why it was failing, so I started the debugger for that test as well. But in the debugger, the test passed! What was going on?

It turns out that there’s a small difference between the implementation of JSON.parse in the PhantomJS runtime, which was used when running tests outside the debugger and throws an exception on the leading zero, and the Internet Explorer runtime, which was used during debugging and is just fine with the leading zero. See the topic Debugging unit tests in the documentation for that story!
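This particular variance is easy to check for yourself: the JSON grammar disallows leading zeros on numbers, so a spec-conformant JSON.parse rejects the last input in the earlier list, while a lenient parser accepts it. A quick probe:

```javascript
// The JSON grammar disallows leading zeros on numbers, so a conformant
// JSON.parse throws for this input; a lenient engine may accept it.
// This is exactly the kind of runtime variance described above.
var input = '{"Name": "Maria", "PersonalIdentifier": 002111858}';

try {
    var data = JSON.parse(input);
    console.log('parsed:', data.PersonalIdentifier);
} catch (e) {
    console.log('rejected:', e.name); // SyntaxError in conformant engines
}
```

Running this probe on each target runtime tells you which behavior your normalizeData tests will actually see there.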

What’s important to keep in mind here is that you may find a few similar differences between the runtime used for unit testing and the platform runtime used when the app runs for real. For this reason it’s a good idea to occasionally run all your unit tests on each mobile platform. To do this, add a special page to a debug build of your app that runs tests when you navigate there. In this case, the act of navigation serves as the test runner, and you’ll need to explicitly reference the test framework libraries so that they’re loaded for that page.
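As a rough illustration, such a debug-only page might look like the markup below (the file paths here are assumptions for illustration; adjust them to wherever the Jasmine standalone files and your specs live in the project). Jasmine’s boot script runs the loaded specs on the page’s load event and renders results with the in-page HTML reporter.

```html
<!-- Sketch of a debug-only test page; navigating here runs the tests.
     Paths are illustrative placeholders, not a prescribed layout. -->
<!DOCTYPE html>
<html>
<head>
    <title>Unit tests (debug build only)</title>

    <!-- Test framework: jasmine.js first, then the HTML reporter,
         then boot.js, which runs the specs on window load -->
    <link rel="stylesheet" href="lib/jasmine/jasmine.css" />
    <script src="lib/jasmine/jasmine.js"></script>
    <script src="lib/jasmine/jasmine-html.js"></script>
    <script src="lib/jasmine/boot.js"></script>

    <!-- Code under test, then the specs from this post -->
    <script src="scripts/normalizeData.js"></script>
    <script src="tests/normalizeData.tests.js"></script>
</head>
<body>
    <!-- Jasmine's HTML reporter renders pass/fail results here -->
</body>
</html>
```

Exclude this page (and the test libraries) from release builds so they never ship to users.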

 

Closing

That wraps it up for parts 1 and 2 of this post, which is again a shorter version of content that we’ve published in the Author & run tests section of the documentation. There you’ll find more detailed walkthroughs, along with a discussion about using “mocks” to handle calls to platform APIs and other external dependencies that the unit testing runtime won’t have access to. Let us know what you think!

I’d also love to hear how you work with unit testing in Cordova apps, how you’re working with UI testing (both manual and automated), and how we can further improve our support through the Visual Studio Tools for Apache Cordova. Add a comment below, drop me a line at kraigb (at) microsoft.com, or submit suggestions via http://visualstudio.uservoice.com/.

Happy testing!


Kraig Brockschmidt, Senior Content Developer, Visual Studio

@kraigbro

Kraig has been around Microsoft since 1988, working in roles that always have to do with helping developers write great software. Currently he’s focused on developing content for cross-platform mobile app development with both Xamarin and Cordova. He writes for MSDN Magazine, is the author of Programming Windows Store Apps with HTML, CSS and JavaScript (two editions) and Inside OLE (two editions, if you remember those) from Microsoft Press, occasionally blogs on kraigbrockschmidt.com, and can be found lurking around a variety of developer conferences.