When you are deep inside the code for an application, this question often doesn't get pondered.
It seems like an obvious question: "What to test?" "Duh, everything!" Or "What I just implemented."
But really, testing everything is often out of the question for a reasonably complex application; the permutations of states and settings grow quickly. If you have 4 settings which can each be one of two values, that's 16 (2 x 2 x 2 x 2) different combinations to be tested. Add another setting of only 2 choices and you have 32 combinations. If each of the original 4 settings can instead be one of three values, there are 81 (3 x 3 x 3 x 3) combinations. I think you can see the problem. If you have a good unit testing framework that will let you automate generation of numerous test cases, that can help a lot. But you still need to consider the question and the answer, and decide whether you are testing the right parts of the application, whether it's unit tests, manual tests, system tests, whatever.
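As a minimal sketch of that kind of automated generation, here is a Python unittest example that produces a subtest for every combination of settings instead of 16 hand-written cases. The setting names and the check_app function are hypothetical placeholders, not anything from a real application:

```python
# Sketch: generate a test case for every combination of settings.
# The settings and check_app() are made-up stand-ins for illustration.
import itertools
import unittest

SETTINGS = {
    "cache_enabled": [True, False],
    "compression": [True, False],
    "verbose": [True, False],
    "strict_mode": [True, False],
}


def check_app(config):
    # Hypothetical placeholder for exercising the application with one config.
    return True


class TestAllSettingCombinations(unittest.TestCase):
    def test_every_combination(self):
        names = list(SETTINGS)
        combos = list(itertools.product(*SETTINGS.values()))
        self.assertEqual(len(combos), 16)  # 2 x 2 x 2 x 2
        for values in combos:
            config = dict(zip(names, values))
            with self.subTest(**config):
                self.assertTrue(check_app(config))


if __name__ == "__main__":
    unittest.main()
```

Even with generation this cheap, every added setting doubles (or triples) the run, which is exactly why deciding what to cover still matters.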
There are a number of different approaches you can think through to analyze your testing, which can help answer the question of what to test and put together a plan for effective testing. The first idea is to think about where the risks are. Often this is in the newest code, the most complicated code, or the areas with the most demanding performance requirements. Look for long routines and count the conditional statements if you want to make it a quantitative analysis.
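As a rough illustration of that quantitative angle, here is a small Python sketch that counts branch points per function in a source file. The file path and the choice of which node types to count are my assumptions; treat the output as a crude risk signal, not a real complexity metric:

```python
# Sketch: count conditional/branching statements per function as a
# rough "where is the risk" signal. Nested functions are counted
# inside their parent as well, which is fine for a crude metric.
import ast
import sys


def conditionals_per_function(source: str):
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            counts[node.name] = sum(
                isinstance(child, (ast.If, ast.For, ast.While, ast.Try))
                for child in ast.walk(node)
            )
    return counts


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "app.py"  # assumed file name
    with open(path) as f:
        results = conditionals_per_function(f.read())
    for name, count in sorted(results.items(), key=lambda item: item[1], reverse=True):
        print(f"{name}: {count} branch points")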
Looking at the interfaces is a different angle for analysis. Not just application programming interfaces, but also network communications, shared memory, library calls, database queries, and data acquisition are all types of interfaces. Interfaces are often the areas with more edge cases to test and more chances for input values to be wrong.
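For instance, here is a hedged sketch of boundary-value tests at an interface. The parse_port function is a hypothetical bit of input validation standing in for whatever sits at your network or API boundary:

```python
# Sketch: edge-case tests at an interface boundary.
# parse_port() is an illustrative example, not from the article.
import unittest


def parse_port(value: str) -> int:
    """Parse a TCP port number from text, rejecting out-of-range input."""
    port = int(value)  # raises ValueError on garbage input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


class TestParsePortEdgeCases(unittest.TestCase):
    def test_boundaries_accepted(self):
        self.assertEqual(parse_port("1"), 1)
        self.assertEqual(parse_port("65535"), 65535)

    def test_out_of_range_rejected(self):
        for bad in ("0", "65536", "-1"):
            with self.subTest(value=bad):
                self.assertRaises(ValueError, parse_port, bad)

    def test_garbage_rejected(self):
        self.assertRaises(ValueError, parse_port, "not-a-port")


if __name__ == "__main__":
    unittest.main()
```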
Often using probabilities can help focus testing efforts. What are the most likely code paths? Here you can make use of logging and historical information from your software. Looking at your application from the outside-in or top-down also works well for picking out the features to test.
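As one way to mine that historical information, the sketch below tallies the most frequently hit paths from a log file. It assumes a simple format where the path is the first token on each line and a file named access.log; both are assumptions for illustration:

```python
# Sketch: use historical logs to find the most-exercised paths,
# so testing effort can be weighted toward the most likely code paths.
from collections import Counter


def most_common_paths(log_path: str, top_n: int = 10):
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if parts:
                counts[parts[0]] += 1  # first token assumed to be the path
    return counts.most_common(top_n)


if __name__ == "__main__":
    for path, hits in most_common_paths("access.log"):  # assumed file name
        print(f"{hits:8d}  {path}")
```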
There's always something to test. Of course, these analyses can also help inform your decisions on what not to test. This is equally important, as you want to avoid diminishing returns on your testing efforts.
Good ideas on what to do instead of testing are here: https://users.ece.cmu.edu/~koopman/des_s99/sw_testing/#alternative