Software Testing 101: How to get started testing.

There are a lot of resources on the web about software testing, but for the most part they are fragmented and tactical, and it can be confusing to know where to start or how to pull it all together and use it in your job. Here is a crash course in software testing. It took me years to learn some of the “obvious” stuff, and this post covers the very basics. If you already have a good handle on test planning and execution, you probably won’t learn much. If you are thinking of moving into software testing from college or another profession, this should be very useful to you. If you work for a small company and have to invent testing for yourself, this is where you start.

Be methodical

The number one thing I look for when hiring testers is a methodical, repeatable approach to testing. Every other testing skill is less important than this, and it is the area where most candidates fall down.

Make everything as simple as possible, but not simpler. -Albert Einstein

Being methodical means having a plan. Plans range from mere sketches on the whiteboard to hundred-plus-page documents that get peer reviewed and archived for eternity. The elaborateness of your plan should be proportional to how long it will live and how far it will travel. If you are writing a plan for software so large that the plan alone will take a year to complete, you need a bigger, more robust plan. If your plan is being shipped to another time zone or country for other people to use, it needs to be bigger and more robust. Make your test plan as simple as you can, but don’t skimp on the details if there is a reason you will need them down the road.

Go from most important to least important

I see testers who have a “go for the throat” mentality: every test they do is designed to crash the product. Finding crashing bugs is fun, but 99% of the important bugs aren’t crashing bugs. In fact, some of the most important bugs you will ever report will be debatable as bugs at all. Bugs like “This feature is hard to discover,” “New users might find this confusing,” and “Why do we need this feature?” are where really effective testers make life better for users. Always ask what the user would think is important. The first thing a user wants is for the software to do what it should in a way that’s understandable. Once you can prove that, then go find your crashing bugs.

Plan Basics

There are many kinds of tests we can run against software, and your test plan should consider at least the following. It’s OK for a section to be NA (not applicable) with a short justification. Here, in order of importance, are the sections every test plan should consider. You should be able to compare and contrast any two of these; if you don’t understand a type of testing, do some research. This isn’t an exhaustive list, but it’s a good place to start. If you drew this list as a Venn diagram, there would be lots of overlap among the categories, so don’t get hung up on crisp distinctions. As you go down the list the relative importance gets fuzzier: the top four are always important, and after that reasonable people will disagree. There are important concepts, like exploratory testing, that get lost when you work from a script like this. Don’t worry about them until you have the basics down. Any good tester simply must be able to plan and execute on a script like this one (a short sketch of what the first two sections look like in practice follows the list).

1. Functional Positive - Does the software do what we expect when the user does what we expect?

a. Nominal cases. (well within the expected limits)

b. Positive boundaries. (Lots of bugs show up at the boundaries.)

c. Build Verification Tests*. (Usually a small subset of a & b; be sparing. People overdo BVTs a lot.)

2. Functional Negative - Does the software fail gracefully when the user does something we don’t expect?

a. Nominal cases. (well outside the expected boundaries)

b. Negative boundaries.

3. Integration Testing - Does the feature work with the rest of the system?

a. Core use scenarios

b. Acceptance scenarios

c. Corner-case scenarios

d. Hardware matrix

e. OS matrix

4. Security - Can someone do something we don’t want them to do?

5. Usability - Is the software easy for the target customer to use?

6. Testability - Can we easily test the feature?

a. Manual testability

b. Automation testability

7. Performance, stress and long haul. AKA the torture tests.

8. Globalizability - Can the product be easily translated for different regions/uses?

9. Localization - Is the software correct for a particular region/use?

* BVT’s are the most important tests. Period. If you don’t have time for anything else, run these tests.

What and How are different

The basic plan says what you should be testing for, not how. For example, you may know the difference between “White Box” and “Black Box” testing; these are techniques that fall under how you test. Manual testing versus automation is another pair in this category. I am a big believer in automation, but only if it can cover the bases I listed above. You need to know about most of the tools that fall under how, but they are just tools for getting the software tested. Don’t get too hung up on them as long as you can cover the bases above.
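Here is a small illustration of the distinction, again with a made-up function. Both tests verify the same what (the discount contract); only the second uses knowledge of the implementation to choose its how.

    _cache = {}

    def price_after_discount(total):
        # Hypothetical contract: orders of 100 or more get 10% off.
        if total in _cache:  # internal detail only a white-box tester knows about
            return _cache[total]
        result = total * 0.9 if total >= 100 else total
        _cache[total] = result
        return result

    # Black Box: test purely against the published contract.
    def test_contract_black_box():
        assert price_after_discount(100.0) == 90.0
        assert price_after_discount(99.0) == 99.0

    # White Box: we read the source, saw the cache, and deliberately
    # exercise the cache-hit path with a repeated call.
    def test_cache_hit_white_box():
        assert price_after_discount(100.0) == 90.0
        assert price_after_discount(100.0) == 90.0  # second call returns the cached value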

Don’t do too much testing

Strange words from a tester, I know, but it’s really important that you grasp this concept if you want to succeed as a tester. If you don’t, you are heading for frustration and burnout. The words “just in case” are the biggest productivity killers in the test world. Consider that a ten-digit calculator program has around 40 billion possible test cases. Would you run them all “just in case”? In more complex software the sun will go out before you get through all the “just in case” stuff. Your return on investment drops quickly as you drill into an area, so be mindful of when you are hitting the point of diminishing returns. In the calculator there are about 20 really great tests. Up to test 100 is worth your time. Somewhere between test 101 and 1,000 you start wasting it: you will be spending more time testing than the end cost of your product can justify.
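One plausible way to arrive at a number of that size (my decomposition, not necessarily the original math): about 10**10 possible ten-digit operand values times the four basic operations is 4 x 10**10 cases. Equivalence-class partitioning is how you get from billions down to a couple dozen great tests: pick one representative value per class, plus the boundaries.

    # Back-of-envelope scale of testing everything "just in case."
    exhaustive = (10 ** 10) * 4
    print(f"{exhaustive:,} exhaustive cases")  # 40,000,000,000

    # Hand-picked equivalence classes and boundaries instead.
    MAX_DISPLAY = 9_999_999_999        # largest ten-digit value
    interesting_operands = [
        0,                             # additive identity
        1,                             # multiplicative identity
        7,                             # arbitrary nominal value
        MAX_DISPLAY,                   # positive boundary
        MAX_DISPLAY + 1,               # negative boundary: one past the display
    ]
    operations = ["+", "-", "*", "/"]
    second_operands = [0, 1]           # divide-by-zero rides along for free

    cases = [(a, op, b) for a in interesting_operands
                        for op in operations
                        for b in second_operands]
    print(f"{len(cases)} hand-picked cases")  # 40 cases instead of 40 billion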

Be evil

The second trait I look for in a testing hire is the ability to be evil; in other words, to violate the expectations of the developer and find bugs. I would hire someone who was merely super methodical and organized for a temporary position, but if you want a permanent spot on my team you need to be able to think around corners. This section is shorter because it really has to come from within. Just be aware that being organized isn’t enough. You have to have passion for testing.

Delight in breaking stuff

Breaking software and finding bugs is fun. You have to really enjoy finding your way into places no one expected you to go. An evil laugh when you figure out a diabolical way to crash a product is a real plus.

Keep your evil focused on the product and not the authors.

Any developer who is working efficiently will create bugs. Period. You must be able to form a close working relationship with someone whose livelihood you are paid to criticize. You won’t get a lot of mileage being condescending or abusive to developers. A software tester’s job is to improve the software while staying on friendly terms with the developer. Inexperienced developers should be getting better just by reading and understanding the bugs you bring to them. More advanced developers should be giving you hints about where they feel bugs might be hiding. Remember that creating things is what a developer does. If you go in like a two-year-old bully and kick over the blocks all the time, you will damage your ability to improve the product in the long run, and when you ask questions like “Will the users find this useful?” the developer will just give you the stink eye.

Channel the customer

When you are looking for problems in the product, the most compelling ones have a good customer story behind them. A bug that reads “Product has an unexpected exception that causes a reset” isn’t very compelling to fix if, for example, you had to use another program to corrupt the product’s memory to trigger the crash. (I have even seen people argue that a software product should be more robust in the case that the customer pulled a chip out of the computer while it was running!) The same bug with a reasonable customer scenario is a lot more compelling: “Common software package X corrupts memory and our product isn’t hardened against it.” Always remember we are making the software better for the customers. Sometimes that means stepping into parts of the code they could never see to make sure it’s robust. Other times it means thinking and acting just like they might, or understanding how they could blunder into a part of the product where they will get bitten.

The bottom line

Testing is an art and a science. If you want to be a good tester, master the science part. Some really good testers I know are test case machines: they churn out cases and run them at a prodigious rate, and they give us a high level of confidence that the software will do what the customer expects. Excellent testers know when the methodical approach is reaching the point of diminishing returns and adjust their game plan accordingly, but they start with a solid foundation.