Software engineers have always faced one fundamental problem: they can never test a software system or application completely. One reason is the sheer complexity of software, which makes it practically impossible to exercise every permutation and combination of execution paths.
For systems that operate in an interdependent environment, the goal of complete testing can never be achieved. When exactly a testing process should stop is a question that still has no definitive answer.
However, software engineers have identified some common factors that decide when testing should stop, and these factors are stated below:
1. Test cases completed with a certain percentage of successful results.
2. Deadlines reached, including testing as well as release deadlines.
3. Code coverage, function coverage, and requirements coverage reach a specified level.
4. Test budget depleted.
5. The rate of bug discovery has fallen below a level worth considering.
6. End of the alpha testing period.
7. End of the beta testing period.
8. The risk involved in the software project is within limits small enough to be ignored.
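The factors above can be combined into a simple release-readiness check. The sketch below is illustrative only: the field names and every threshold are assumptions, not industry standards, and a real team would tune them to its own project.

```python
from dataclasses import dataclass

@dataclass
class TestStatus:
    pass_rate: float        # fraction of test cases passing (0..1)
    code_coverage: float    # fraction of code exercised (0..1)
    bugs_per_week: int      # recent bug-discovery rate
    residual_risk: float    # estimated remaining risk (0..1)

def ready_to_stop(s: TestStatus) -> bool:
    """True when all the common stopping criteria are met.
    Thresholds are illustrative assumptions only."""
    return (s.pass_rate >= 0.95
            and s.code_coverage >= 0.80
            and s.bugs_per_week <= 2
            and s.residual_risk <= 0.05)

status = TestStatus(pass_rate=0.97, code_coverage=0.85,
                    bugs_per_week=1, residual_risk=0.03)
print(ready_to_stop(status))  # True for these sample figures
```

A project would typically evaluate such a check at the end of each test cycle rather than continuously.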
In practice, however, the wisest point to stop testing is when the risk remaining in the project becomes acceptable to project management.
An ideal testing process, if it were carried out completely, would never end; in practice that is impossible.
Therefore, to escape this fix, software engineers assume that once a certain pre-defined level is reached, testing is 100 percent complete. This much risk has to be accepted if the software product is to be delivered within the stipulated period of time. The product is then shipped to the customer with X amount of testing done. On large projects the remaining risk is measured through a process called risk analysis.
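One common ingredient of such a risk analysis is risk exposure: the probability of a failure multiplied by its cost. The sketch below uses this standard formula; the defect probabilities and costs are made-up figures for illustration.

```python
def risk_exposure(probability: float, cost: float) -> float:
    """Classic risk exposure: likelihood of a failure times its cost."""
    return probability * cost

# Hypothetical residual defects that would ship untested
risks = [
    (0.10, 5000.0),   # 10% chance of a field defect costing 5000 to fix
    (0.02, 20000.0),  # rare but expensive failure
]
total = sum(risk_exposure(p, c) for p, c in risks)
print(total)  # 900.0
```

Management can then compare the total exposure against the cost of further testing to decide whether stopping is acceptable.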
For small projects, however, it is possible to reduce the risk to a level where it can be ignored. This can be done using the following measures:
1. Number of test cycles completed
2. Measurement of test coverage
3. Number of high-priority bugs still open
– The Software Assurance Technology Center (SATC) is currently carrying out research on using software error data to indicate the status of testing.
– This research is based on projections of the number of errors found in the application under test (AUT) and the estimated time needed to dig out the remaining errors in the software.
– Determining the number of errors remaining in the software requires two figures: the number of errors found and corrected during testing, and an estimate of the number of errors present at the start of testing.
– Several models have been developed that fit the rate at which errors are found; the most popular is the Musa model.
– However, the Musa model has not proved fit to apply at GSFC, possibly because of the quality and availability of the error data there.
– Of the projects SATC studied, only a few programs had any system for tracking errors, and even those lacked a format consistent enough for recording them.
– Other projects did record errors, but did not record the resources used during testing.
– The error data mostly contained the dates on which errors were entered into the tracking system rather than the actual dates on which the errors were discovered.
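Once an initial estimate exists, the remaining-error count described above is simple arithmetic. A minimal sketch, with made-up numbers:

```python
def remaining_errors(estimated_initial: int, found_and_corrected: int) -> int:
    """Errors believed to still be latent: the estimated number
    present at the start of testing minus those found and fixed."""
    return estimated_initial - found_and_corrected

print(remaining_errors(estimated_initial=120, found_and_corrected=95))  # 25
```

The hard part in practice is the initial estimate, which is exactly what models such as Musa's try to supply.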
– The Musa model is a traditional model, and applying it requires two basic inputs:
1. Precise data on the discovery of errors, and
2. The level of resources applied to testing.
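These two inputs come together in Musa's basic execution-time model, which says the expected cumulative failures after execution time tau follow nu0 * (1 - exp(-lam0 * tau / nu0)), where nu0 is the total expected number of failures and lam0 the initial failure intensity. The sketch below implements that formula; the parameter values are illustrative assumptions, not fitted data.

```python
import math

def mean_failures(tau: float, lam0: float, nu0: float) -> float:
    """Musa basic model: expected cumulative failures after
    execution time tau (initial intensity lam0, total failures nu0)."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

def failure_intensity(tau: float, lam0: float, nu0: float) -> float:
    """Current failure intensity, which decays exponentially as testing proceeds."""
    return lam0 * math.exp(-lam0 * tau / nu0)

# Illustrative parameters: 100 expected failures in total,
# 5 failures per CPU-hour at the start of testing.
nu0, lam0 = 100.0, 5.0
for tau in (0.0, 10.0, 40.0):
    print(tau, round(mean_failures(tau, lam0, nu0), 1))
```

With precise discovery dates and resource data, lam0 and nu0 can be fitted from the observed failure log; without them, as the SATC findings above suggest, the model cannot be applied reliably.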