Select the Right Set of Test Cases to Automate
With relentless pressure from management to automate more and more test cases, test automation engineers need clear criteria for what is suitable for automation and what is not.
Filtering out tests that are not suitable for automation
Test scenarios that obviously cannot be automated should be filtered out first. The following tests should not be automated:
• Tests that run only rarely (e.g., tests that run once a year)
• Tests where the software is unstable (early stage of development)
• Tests where the result is easily verified by a human but is difficult to verify in automation, e.g., “verify that the icon is gray and white”
• Tests that involve manual interaction.
Modern automation tools can automate almost anything a manual tester can test through the GUI. However, the question is not “Is it possible to automate?” but rather “Is it easy to maintain?”
The fundamental suitability criterion for automation is low maintenance cost. Repetitive test cases are the best candidates for automation: they allow a data-driven approach, which minimizes the amount of code while maximizing coverage.
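As a minimal sketch of the data-driven idea (the function under test and its validation rules are hypothetical, purely for illustration), one test routine can be driven by a table of data rows instead of a separate script per case:

```python
# Hypothetical function under test: validates login input.
def validate_login(username: str, password: str) -> bool:
    return bool(username) and len(password) >= 8

# The data table carries the variation; the test logic is written once.
CASES = [
    ("alice", "s3cretpass", True),   # valid credentials
    ("alice", "short", False),       # password too short
    ("", "s3cretpass", False),       # missing username
]

def run_data_driven_suite(cases):
    """Run one test body over every data row; return pass/fail per row."""
    return [validate_login(u, p) == expected for u, p, expected in cases]
```

Adding a new scenario is then a one-line data change, which is what keeps the maintenance cost of data-driven suites low.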
Also, when selecting automation candidates for a new feature or functional area, two factors can drastically affect the automation effort and the amount of rework: requirement stability and application stability. The ideal strategy is to defer automation development until the specific application release has been deployed.
When this is not possible, automation should start only after:
• Requirements are signed off
• The application has passed at least one cycle of testing manually
Normally, 20-30% of automation development time is spent on maintenance, and it can potentially get much worse. Automating a stable release that is not undergoing continuous change saves maintenance effort and provides value.
Do not automate 100% of test cases!
Software that is tested manually is tested with a randomness that helps find bugs in more varied situations. Because automated test cases usually do not vary, they may miss many bugs that manual testing would find. It is important to remember that automated software testing is NOT a complete substitute for manual testing.
Industry statistics show that automated tests find about 30% of all software defects, while manual testing finds the other 70%. When trying to automate all test cases, the time spent automating certain tests will never be recovered; that is neither business savvy nor realistic. A general rule of thumb is to never automate more than 40% of all test cases, ensuring maximum benefit at the lowest investment cost.
Lifetime of automation scripts
The longer a specific application release is supported and requires testing, the more often its automation scripts will run. The value of an automated script is linked to how many times it will be repeated.
ROI (Return on Investment) Analysis
Proper analysis is required to identify the right test cases for automation and ensure a return on investment (ROI) for the effort. At a fundamental level, automation ROI is driven by the coverage the automation scripts provide and the time they save, weighed against the time it takes to develop and maintain those scripts. The following factors define ROI:
• Impact on Project Risk
• Gain in Execution Time
• Maintenance Effort
Impact on Project Risk
Choosing automation candidates that provide coverage for the most critical or challenging areas of the application makes a large impact on the risk associated with the project. Impact is estimated on a two-level scale:
• Crucial/Large impact
• Medium/Low impact
Test case repeatability is measured by how many times the test case is repeated. The more times a test case runs within the same cycle, the higher the ROI. Here is the criterion:
A test case is suitable for automation if it will be called more than twice during the cycle run. Example: data-driven test cases re-used multiple times with different data sets.
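The repeatability criterion can be expressed as a trivial check (the helper names below are illustrative, not from the text); note that a data-driven test re-used with N data sets counts as N calls per cycle:

```python
def calls_per_cycle(times_in_cycle: int, data_sets: int = 1) -> int:
    """A data-driven test invoked with N data sets counts as N calls per run."""
    return times_in_cycle * data_sets

def meets_repeatability_criterion(calls: int) -> bool:
    """Criterion from the text: suitable if called more than twice per cycle."""
    return calls > 2
```

For example, a single data-driven test run once per cycle over five data sets yields five calls and passes the criterion, while a one-off test run once does not.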
Gain in Execution Time
Automated scripts can dramatically increase testing speed, so test cases that take a long time to execute manually are good candidates for automation. However, all factors affecting the time need to be considered: setup time, data preparation time, execution time, and results analysis time. If an automated test case saves at least 50% of the manual execution time, it is considered suitable for automation.
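As a small worked example of this criterion (the minute values are made up for illustration), the total manual time should include all four factors before comparing it against the automated run:

```python
def manual_minutes(setup: int, data_prep: int, execution: int, analysis: int) -> int:
    """Total manual effort per run: all four time factors from the text."""
    return setup + data_prep + execution + analysis

def saves_enough_time(manual_total: float, automated_total: float,
                      threshold: float = 0.5) -> bool:
    """Criterion from the text: automation should save at least 50% of manual time."""
    return (manual_total - automated_total) / manual_total >= threshold

# Example: 10 + 15 + 30 + 5 = 60 manual minutes; the automated run takes 20,
# saving 40 of 60 minutes (about 67%), so the case qualifies.
total = manual_minutes(setup=10, data_prep=15, execution=30, analysis=5)
qualifies = saves_enough_time(total, automated_total=20)
```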
Maintenance Effort
Automated scripts can be particularly sensitive to maintenance costs. Part of the problem is externally driven, when the application’s interfaces or functionality change. Another part is internally driven, when the automation team makes infrastructural changes or upgrades versions of the automation toolset. In both cases, the way to reduce maintenance is to use modular, re-usable code and re-usable test cases as much as possible; the maintenance cost per test case will then be significantly lower. This criterion is clearly linked to repeatability and the data-driven approach.
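One common way to achieve that modularity is a page-object-style helper layer. The sketch below is hypothetical (the locator strings, `FakeDriver`, and its `type`/`click` methods are invented for illustration, not a real tool's API): UI details live in one class, so an interface change means one fix rather than edits to every script that logs in.

```python
class FakeDriver:
    """Stand-in for a real automation driver; records actions for illustration."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    """Page object: locators and the login flow are defined exactly once."""
    USERNAME_FIELD = "id=username"   # assumed locator strings
    PASSWORD_FIELD = "id=password"
    SUBMIT_BUTTON = "id=submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type(self.USERNAME_FIELD, username)
        self.driver.type(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)
```

Every test case that needs to log in calls `LoginPage(driver).login(...)`; if the login form changes, only the locators in this one class need updating.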
ROI Analysis Table
The following questions need to be answered to define the automation ROI for a specific test case.
* Note: each ‘Yes’ contributes 0.25, for a total of 1.0 when all questions are answered ‘Yes’. A test case is considered a candidate for automation when its ROI score is 0.75 or higher.
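The note’s arithmetic implies four yes/no screening questions (the table itself is not reproduced here, so the question wording is assumed); the scoring itself can be sketched as:

```python
def roi_score(answers):
    """Each 'Yes' answer contributes 0.25, per the note above."""
    return sum(0.25 for answer in answers if answer)

def is_automation_candidate(answers, threshold=0.75):
    """A test case is a candidate when its ROI score is 0.75 or higher."""
    return roi_score(answers) >= threshold

# Three of four 'Yes' answers score 0.75, which meets the threshold.
is_automation_candidate([True, True, True, False])
```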