A product's user interface is one of the most challenging areas in which to increase testability. It's also the part of the product our customers use the most. That makes it important to write your user interface code to be as testable as possible.
The metaphor of the UI appeals to us subconsciously. Sometimes this tricks us into thinking in terms of the metaphor instead of the code.
Take the example of the calculator. What you see on the screen isn't the same thing that's sitting on your desk. It's a hollow shell, just a picture really. Disengage from your product's metaphor from time to time so you can test the UI more effectively.
Avoid exploding test matrixes.
UI automation is a tricky business. There are a lot of things to consider; just finding and clicking a button can be a challenge from an automation point of view. When you are using automation to test the UI, you need to remove the rest of the product from the picture if you don't want your test matrix to approach infinity.
Scenarios are customer focused at a cost.
A simple ten-digit, four-function calculator has 40 billion possible scenarios before you even count the decimal point and sign-change buttons.
In order to hit every one of those cases in a week you would need to run over 66,000 test cases per second, 24/7.
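It's worth sanity-checking those numbers. Assuming the 40 billion comes from one ten-digit operand (10**10 possible entries) combined with each of the four operations, the arithmetic works out:

```python
# Back-of-the-envelope check of the numbers above. Assumption: one
# ten-digit operand (10**10 possible entries) times four operations.
scenarios = 10**10 * 4                  # 40,000,000,000
seconds_per_week = 7 * 24 * 60 * 60     # 604,800
rate = scenarios / seconds_per_week     # roughly 66,000 cases per second
print(f"{scenarios:,} scenarios -> {rate:,.0f} cases per second")
```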
Scenario testing is a Good Thing™. Make sure you are testing core customer scenarios. Just realize that the number of test cases you can cover with scenarios is a drop in the bucket compared to total test coverage.
Despite the massive test surface most applications present, I don't want you to think you will never be able to test a significant fraction of it. The trick is to make your applications testable. Remember that if you have to do everything via the UI, you will have the exploding test matrix problem.
Test and Automate bottom up.
You can beat the problem of exploding test matrixes by using a bottom-up approach. Write the simplest, most diagnostic tests first and run them a lot. Unit tests are great because they are fast and very targeted. A really good pattern is for the developer to write a unit test and the tester to expand it into three or four more similar tests. High-boundary, low-boundary, and error tests are the usual additions.
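That pattern can be sketched in a few lines. The `parse_amount` function here is a hypothetical stand-in for whatever the developer just wrote; the first test is the developer's, and the other three are the tester's boundary and error expansions:

```python
def parse_amount(text: str) -> float:
    """Hypothetical function under test: parse a non-negative amount."""
    value = float(text)
    if value < 0:
        raise ValueError("amount must be non-negative")
    return value

# The developer's original "happy path" unit test:
def test_typical_value():
    assert parse_amount("42.50") == 42.5

# The tester's expansions: high boundary, low boundary, and an error case.
def test_high_boundary():
    assert parse_amount("9999999999") == 9999999999.0

def test_low_boundary():
    assert parse_amount("0") == 0.0

def test_error_case():
    try:
        parse_amount("-1")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

for t in (test_typical_value, test_high_boundary,
          test_low_boundary, test_error_case):
    t()
```

Each test is tiny and targeted, so when one fails it points straight at the defect.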
Next, automate testing for your component. Isolate it from the product and test its inputs, outputs, and error handling. If you can't do this, you may need to refactor your design. If you think refactoring the design is too expensive, just remember to factor in all the lost test coverage.
Move on to the API testing (sometimes this is the same as a component). APIs are cool because you can often use automation to generate the test cases. A standard API could easily have as many as 40,000 automated test cases that can run in a few minutes.
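As a sketch of what generated API tests can look like (the `api_add` function and the boundary grid are hypothetical stand-ins), a few lines of automation can fan a handful of interesting values out into a full matrix of cases checked against property-style oracles:

```python
from itertools import product

def api_add(a: int, b: int) -> int:
    """Hypothetical API under test."""
    return a + b

# A few interesting boundary values fan out into a full grid of cases.
boundary_values = [-10**9, -1, 0, 1, 10**9]
cases = list(product(boundary_values, repeat=2))   # 5 * 5 = 25 cases

# Property-style oracles: check laws the API must obey,
# rather than hand-coding an expected answer per case.
for a, b in cases:
    assert api_add(a, b) == api_add(b, a)   # commutativity
    assert api_add(a, 0) == a               # additive identity

print(f"ran {len(cases)} generated cases")
```

Widen the value grid and add more parameters and this approach quickly scales generated suites into the tens of thousands of cases the text mentions.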
Lastly, do the scenario tests. They are the most finicky, time-consuming tests to create. You almost always end up tweaking them a lot, and they are a challenge to make robust. If you were to start at the scenario level, you could easily blow your automation budget on the “BVT” scenarios. If you start at the bottom, you can make steady progress and give the core scenarios the attention they deserve, because you know you are building on a solid base.
Deliver simple tests early.
Delivering high-quality software on time is a tough challenge. By closing the feedback loops in the process and making them as short as possible, we can increase productivity. AKA working smart.
The shorter the interval between the time a developer creates a defect and the time the defect is noticed, the better. The fixes are better, the developer improves their skills faster, and there are fewer costly last-minute fixes. Everyone wins.
An automation suite that hits 100% of the test cases but takes 2 months to deliver is much worse than a handful of component tests that are ready to run the day after the code is written.
If your software is built with testability in mind, you can incrementally release test automation as you create it. Software with poor testability will cause the average time to find defects to move out to the horizon.
Separate controls from the product.
When I am in a program and click a control I expect something specific to happen in the product.
What I rarely think about is that I am actually using a layered system to accomplish my task. If my product introduces a new kind of control (or an enhanced version of a known control) it’s tempting to “stick with the metaphor” and test the control in the context of the application.
Developers have asked me, “How do I even write a unit test for these kinds of UI controls?” It's a tricky problem without a super-clean solution. The crux of the issue is getting the control away from the context of the application. A good unit test for a menu bar might generate a menu bar outside your application; maybe a standalone test executable is a good idea. Since you can utterly control the look, feel, and contents of this form, you can then open up testing in a way that you couldn't with the product.
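A minimal sketch of the idea, with a hypothetical `MenuBar` class standing in for the custom control: the control is constructed and exercised in a bare harness, with no application behind it.

```python
class MenuBar:
    """Hypothetical stand-in for a custom menu-bar control.

    The point: it can be created and exercised without
    launching the product it normally lives in.
    """
    def __init__(self, items):
        self.items = list(items)
        self.clicked = []

    def click(self, label):
        if label not in self.items:
            raise KeyError(f"no such menu item: {label}")
        self.clicked.append(label)

# Stand the control up in a bare test harness, with fully
# controlled contents -- no application context required.
bar = MenuBar(["File", "Edit", "Help"])
bar.click("Edit")
assert bar.clicked == ["Edit"]

try:
    bar.click("Bogus")
except KeyError:
    pass
else:
    raise AssertionError("expected KeyError for unknown item")
```

Because the harness owns the contents, you can feed the control empty lists, huge lists, and bogus labels, none of which are easy to arrange inside the real product.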
In the product, each menu item might open modal windows and do all manner of other crazy stuff. At some point all of that needs to be tested, but we want a high level of confidence in the control by the time we get to integration testing, rather than trying to test everything at once.
Refactor UI elements into re-usable components.
Look critically at your application and ask what common things you are doing throughout it. One of the tougher testing problems in any application that accepts input is separating the testing of input validation from the testing of product business logic.
A simple form with five inputs could have four validation test cases per input. That's twenty test cases. However, if there isn't a way to test the validation one input at a time, your test cases may have to be run in all the possible combinations (just to be “safe”). Five factorial is one hundred and twenty. That bloats the test pass by a factor of six. What would have taken only one day now takes all week.
The more complicated the software, the worse the test matrix explosion will be. Also notice that if you can encapsulate the input validator, for example, you also provide a powerful test hook. From a testability perspective, it's much better to spend time on this type of refactoring than to worry too soon about performance. Don't performance-tune until you know where the bottlenecks are. You can start this kind of refactoring as soon as you have a couple of modules written, and the payoff is huge.
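Here is what that test hook can look like once the validator is factored out of the form. The `validate_field` rule and the field names are hypothetical; the point is that five inputs times four cases yields twenty targeted checks instead of a combinatorial pass through the whole form:

```python
def validate_field(value: str, min_len: int = 1, max_len: int = 10) -> bool:
    """Hypothetical per-input validator, factored out of the form."""
    return value.isdigit() and min_len <= len(value) <= max_len

fields = ["qty", "price", "zip", "sku", "count"]   # the five form inputs

# Four validation cases per input, run independently:
# 5 inputs * 4 cases = 20 checks, no combinations needed.
cases = [
    ("12345", True),     # valid
    ("", False),         # low boundary: empty
    ("1" * 11, False),   # high boundary: too long
    ("12a45", False),    # error: non-digit character
]

for field in fields:
    for value, expected in cases:
        assert validate_field(value) is expected, (field, value)

print(f"ran {len(fields) * len(cases)} targeted validation checks")
```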
Refactor business rules out of UI.
Another symptom of getting caught up in the product metaphor is tightly coupling business rules to the UI.
Take a hard look at your product and find the rocks your business rules are hiding under. When you find them, factor them out of the UI.
Another way to say this: the UI should be nothing but an empty shell with pretty pictures. It shouldn't know anything about the data that isn't completely generic. A drop-down list should consider the data it contains to be just a list. It shouldn't know things like how to look up names and color-code by category.
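A minimal sketch of that separation, using a hypothetical `DropDown` control and a made-up color-coding rule: the business rule lives outside the control, which only ever sees plain, prepared data.

```python
class DropDown:
    """A deliberately 'dumb' control: it only knows it holds a list."""
    def __init__(self, entries):
        self.entries = list(entries)   # prepared (label, color) pairs
        self.selected = None

    def select(self, index):
        self.selected = self.entries[index]

# The business rule lives OUTSIDE the control (hypothetical rule):
CATEGORY_COLORS = {"fruit": "green", "dairy": "white"}

def build_entries(products):
    """Apply the color-coding rule, then hand plain data to the UI."""
    return [(name, CATEGORY_COLORS[cat]) for name, cat in products]

entries = build_entries([("apple", "fruit"), ("milk", "dairy")])
box = DropDown(entries)
box.select(0)
assert box.selected == ("apple", "green")
```

Now the rule in `build_entries` can be unit tested without any UI at all, and the control can be tested with any list you like.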
Test the UI and components separately.
If you can successfully decompose the UI into smaller, reusable building blocks, you gain some powerful effects on the test side.
Take advantage of that power by doing as much testing outside the product as you can. Polish the components until they gleam. Work on the APIs until they are rock solid. Test the user interface independently.
This closes and shortens the feedback loops.
Stand on a solid base for integration testing.
Follow these recommendations and your integration testing will be on very solid ground.
We often think we can test the components “for free” from the UI during the scenario testing. There are two big drawbacks to this. First, it takes a nearly infinite number of scenario tests to get to all the corner cases. Second, bugs can be tough to pinpoint when we find them.
When the foundation underlying the UI is rock solid you can do integration testing with confidence. You can rely on your core customer scenarios to smoke out the serious integration problems. The likelihood of serious bugs hiding in the corner cases goes down dramatically. The number of scenarios you need to run to have confidence in your product goes down.
When you do find bugs you can start from the premise that they are integration bugs. You don’t have to unwind the stack and identify the buggy component as often. Try to test all your components “for free” with integration testing and you will burn a lot of time investigating bugs.
Disengage from the product metaphor and make your testing less “black box”.
Think about how to get under the hood of your application. Once you can take the layers apart and give each one a good workout, you can feel better about how they work as a package.
Testing product UI and the code that’s closest to the UI can be one of the biggest challenges software engineers face. Go and figure out how to test your application better today.