Quality Assurance (QA) plays an important role in IT product development. QA specialists use test design techniques along with other tools to detect errors and bugs.
Test design is the process of creating tests, each aimed at checking a specific assumption. To check how a product behaves under different conditions, testers model a set of test cases. The specialist's task is to find a balance: detect the maximum number of errors with the minimum number of test scenarios. Since testing time is limited, it is essential that all critical cases are covered.
What are the goals of test case writing?
Test cases allow us to:
- Structure and systematize the approach to testing (without this, a large project is almost certainly doomed to fail).
- Calculate test coverage metrics and take measures to increase it (here test cases are the main source of information).
- Track how the current situation matches the plan (how many test cases will be needed, how many are currently available, how many of the planned test cases have been executed at this stage, etc.).
- Improve mutual understanding between the customer, developers, and testers (test cases often demonstrate the behavior of the application more clearly than the requirements).
- Store information for long-term use and share experience between employees and teams.
- Perform regression testing and re-testing (which would be impossible without test cases).
- Improve the quality of requirements (writing checklists and test cases is an effective way to test the requirements themselves).
- Quickly onboard new employees joining the project.
There are certain criteria used to determine how many and what kind of test cases are needed for a particular project. In software testing, they are called QA metrics.
Such metrics are indicators that help not only to see the current state of project progress but also to find ways to improve software testing efficiency.
Test Coverage is one of the metrics used to evaluate the quality of testing, and represents the density of requirements or executable code coverage by tests.
One of the approaches to evaluating and measuring test coverage is requirements coverage.
This metric demonstrates the current level of test coverage of all established software requirements. It is the most accurate when the requirements are atomic.
The metric is calculated using the following formula:
Test coverage of requirements = (number of requirements covered by test cases / total number of requirements) * 100%
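The formula above can be sketched in Python; the numbers are illustrative, not from a real project:

```python
def requirements_coverage(covered: int, total: int) -> float:
    """Test coverage of requirements as a percentage."""
    if total == 0:
        raise ValueError("total number of requirements must be positive")
    return covered / total * 100

# Hypothetical project: 18 of 24 requirements are covered by test cases.
print(requirements_coverage(18, 24))  # 75.0
```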
To measure this coverage, divide all requirements into separate elements and link each element to the test cases that verify it. A test case that cannot be traced to a specific software requirement is of little value.
The resulting set of links between test cases and requirements is called a traceability matrix.
By analyzing this connection, you can easily get answers to the following questions:
- Which requirements are tested by particular tests?
- Which requirements require creating and/or editing available tests?
The analysis may also reveal requirements covered by redundant test cases.
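A traceability matrix can be modeled as a simple mapping from requirement IDs to the test cases that cover them. A minimal sketch (all IDs are invented for illustration):

```python
# Hypothetical traceability matrix: requirement ID -> covering test case IDs.
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # not covered by any test yet
}

def requirements_for(test_id: str) -> list[str]:
    """Which requirements are tested by a particular test?"""
    return [req for req, tests in traceability.items() if test_id in tests]

# Which requirements still need tests created or edited?
uncovered = [req for req, tests in traceability.items() if not tests]

print(requirements_for("TC-103"))  # ['REQ-2']
print(uncovered)                   # ['REQ-3']
```

Answering both questions above then becomes a lookup rather than a manual audit.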
The metric of requirement coverage also has its flaws:
- Requirement coverage cannot be measured without maintaining the links between requirements and tests, so you need to allocate time for this activity.
- There might be "blind spots": requirements that are not covered by any test, leaving no way to tell whether the functionality is implemented correctly.
- Test cases should be reviewed regularly. When a requirement changes, the test cases linked to it become outdated and should be updated or removed from the test suite, and identifying such cases takes time.
Other metrics used to assess test case quality:
- Test Case Preparation Productivity: the number of prepared test cases relative to the effort spent on their preparation.
Test case preparation productivity = Number of test cases / Effort spent on test case preparation
- Test Design Coverage: the percentage of requirements covered by test cases.
Test design coverage = (Total number of requirements represented in test cases/Total number of requirements)*100
- Test Execution Productivity: specifies the number of test cases that can be executed per hour.
Test Execution Productivity = Number of executed test cases / Effort spent on executing test cases
- Test Execution Coverage: measures the number of executed test cases compared to the number of planned test cases.
Coverage of executed tests = (Total number of completed test cases/Total number of test cases to be executed)*100
- Test Cases Passed: used to measure the percentage of test cases passed successfully.
Test Cases Passed = (Total number of passed test cases/Total number of executed test cases)*100
- Test Cases Failed: used to measure the percentage of failed test cases.
Failed test cases = (Total number of failed test cases/Total number of executed test cases)*100
- Test Cases Blocked: used to measure the percentage of blocked test cases.
Test Cases Blocked = (Total number of blocked test cases/Total number of executed test cases)*100
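The execution metrics above can be computed together from the raw counts of a single test run. A sketch with invented numbers:

```python
# Hypothetical results of one test run.
passed, failed, blocked = 42, 5, 3
planned = 60
executed = passed + failed + blocked

execution_coverage = executed / planned * 100   # Test Execution Coverage
pass_rate = passed / executed * 100             # Test Cases Passed
fail_rate = failed / executed * 100             # Test Cases Failed
block_rate = blocked / executed * 100           # Test Cases Blocked

print(f"Executed {executed}/{planned} cases ({execution_coverage:.1f}%)")
print(f"Passed: {pass_rate:.1f}%  Failed: {fail_rate:.1f}%  Blocked: {block_rate:.1f}%")
```

Note that the pass, fail, and blocked percentages are taken against *executed* cases and always sum to 100%, while execution coverage is taken against *planned* cases.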
So when are there enough test cases? If the designed test cases cover the entire functionality of the application, then we can say there are enough of them. Next, let's look at the characteristics that good test cases should have.
A good test case has the following characteristics:
- Absence of any dependencies on each other (since tests can be added, modified, outdated, or deleted, and dependencies can create a misleading impression that the product is working as expected).
- Clear formulations, correct technical language, and a high probability of detecting an error.
- Consistent and clear steps and results, with no gaps in the information.
- Presence of detailed, but not excessive, information (for example, a test case for an authorization process should specify the login and password to use).
- Easy error diagnostics (the detected error should be obvious).
All the metrics described above make sense if QA has feature acceptance criteria and product requirements. But in the real world of Agile processes, such requirements often do not exist.
How should we write test cases without any requirements?
- Based on the ideal user experience. Most QA testers have a good sense of what makes a smooth user experience. Even without knowing exactly what the product owner has in mind for a particular feature, it is still possible to estimate which actions will be most appropriate for the end user.
- Ask questions to product managers/developers. You may not get a complete list of details, but they can help with clarification. Make sure to ask specific questions: that way, you will get a quicker answer and minimize follow-up questions. For example, one could ask: "How should this button work?" But it's better to phrase it this way: "When a user clicks this button, what section will they be redirected to?"
- Research similar functions in other apps/websites. Even if a feature is new to the app or website under test, it is probably not technologically innovative, and similar solutions likely exist elsewhere.
- Consider all possible interactions with the selected function. Look at every button that can be selected and every possible combination of parameters. Even if it's something the user will probably never do, it's still worth adding to the test cases. For example, "If the user enters numbers in the name field and tries to save, an error must occur."
- Use the developers' code for test cases. Ask the developers about the logic they used in the code. For instance, if you need to know whether form fields should be cleared after saving, ask the developers how the code handles it. If the code is expected to produce a specific behavior and that behavior doesn't happen in the UI, it is clearly a bug.
- Provide a list of what will be tested to the product manager. When writing test cases without requirements, it is possible to complete testing based on assumptions that differ from the expectations of the product manager, so it makes sense to keep them informed.
- Create a standard list of expectations for any function. Even if the list is short, some requirements can be expected of any feature. For example, if it's an app, it should work on both iOS and Android; if it's a website, it should work in all major desktop and mobile browsers. General accessibility standards can also be included, for example: "A function must be accessible to people with hearing or visual disabilities." If the software has paid subscription levels, only premium users should be able to access new features.
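When the advice above calls for covering every possible combination of parameters, the combinations can be generated rather than listed by hand. A sketch with invented field names and values:

```python
from itertools import product

# Hypothetical inputs for a profile-saving feature.
name_inputs = ["Alice", "12345", ""]   # valid, numeric (should error), empty
platforms = ["iOS", "Android"]
user_tiers = ["free", "premium"]

# One candidate test case per combination of parameters.
cases = [
    {"name": n, "platform": p, "tier": t}
    for n, p, t in product(name_inputs, platforms, user_tiers)
]

print(len(cases))  # 3 * 2 * 2 = 12 candidate test cases
```

Each generated dictionary can then be turned into a concrete test case with its own expected result.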
Writing test cases is now in high demand as a mandatory part of project quality assurance. It is a way of communicating important information about product quality as a result of testing. Clearly defined test cases allow you to run the same test multiple times across successive software versions. They also help track regression errors, that is, recurring errors that affect product quality. A good test case combines conciseness, specificity, and clarity. Finally, let's highlight five characteristics of a perfectly written test case: clear formulation, consistent presentation, reasonable step detail, competent technical language, and the presence of the basic attributes.