Lifecycle Test Automation
Automated testing involves ideas from every phase of the software lifecycle.
- Automatic test case generation from functional specifications.
- Automated unit regression testing using unit test frameworks.
- Automated test harness: ability to apply a complete set of tests to the current version
of a program under test.
- Automated test oracle: determining programmatically whether each test execution represents
success or failure.
- Automated test build: rebuild the set of tests in response to
new code check-in.
- Automated test check-out and configuration: automatic check-out of tests to
multiple operating environments (hardware, OS, run-time library) and arranging
for appropriate configuration actions for each environment.
- Automated test recording: results of each test suite execution are automatically stored
in a database.
- Automated test reporting: producing readable summaries of each application of the test suite,
reporting the configuration and the test results.
- Automatic bug identification and tracking: automatically identifying failed tests for
entry into a bug database and tracking system.
- Automated resolution report verification: ensuring that bugs marked as resolved
by a programmer
actually are resolved according to test results.
- Automated debugging actions: automatically invoke debugging tools that
can perform initial analysis of encountered bugs.
- Automated test pruning: automatically reducing the number of test cases
that are executed in some cases, in order to shorten turnaround cycles and
make best use of available resources.
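Several of the ideas above (harness, oracle, recording) can be sketched together in a few lines. This is a minimal illustration, not a real framework: the program under test, the test IDs, and the expected values are all hypothetical.

```python
# Minimal sketch of an automated harness with a comparison-based oracle.
# The program under test and all test data are hypothetical examples.

def program_under_test(x):
    """Stand-in for the real program under test."""
    return x * 2

# Each test case: (unique test ID, input, expected output).
TEST_SUITE = [
    ("TC-001", 3, 6),
    ("TC-002", 0, 0),
    ("TC-003", -4, -8),
]

def run_suite(suite, fn):
    """Apply every test; the oracle is a simple expected-vs-actual check.

    The returned mapping from test ID to verdict is what an automated
    recording step would store in the results database.
    """
    results = {}
    for test_id, inp, expected in suite:
        actual = fn(inp)
        results[test_id] = "PASS" if actual == expected else "FAIL"
    return results

results = run_suite(TEST_SUITE, program_under_test)
```

Real frameworks add setup/teardown, richer oracles, and reporting, but the apply-compare-record loop is the core of each of them.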
What to Automate?
It may not be possible to automate every activity in testing.
How do you decide what to automate and what to leave to manual work?
The general answer is this: automate as much of the regression test cycle as you can.
This cycle is the full cycle of activities in applying a set of tests to software
for the second or subsequent time. Shortening this cycle is key to
effective testing that minimizes the time-impact of testing on the SQA team
as well as making maximum use of the available testing resources.
What this means is that initial test setup is less important to automate.
In general, the cost of writing a new manual test case is paid only once, while
the cost of manually applying test cases is paid again on every regression run.
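The one-time versus recurring cost trade-off can be made concrete with a toy cost model. The numbers below are illustrative assumptions (minutes per test), not measured data:

```python
# Hypothetical cost model: an automated test costs more to set up,
# but manual *execution* cost recurs on every regression run.

def total_cost(setup_cost, run_cost, runs):
    """Total cost of one test case over `runs` regression cycles."""
    return setup_cost + run_cost * runs

# Illustrative numbers: manual test is quick to write (5 min) but takes
# 10 min per run; automated test takes 30 min to write, ~1 min per run.
manual = total_cost(setup_cost=5, run_cost=10, runs=20)     # 5 + 200 = 205
automated = total_cost(setup_cost=30, run_cost=1, runs=20)  # 30 + 20 = 50
```

Under these assumptions the automated test wins after only a few regression cycles, which is why the regression loop, rather than initial setup, is the priority for automation.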
Test Metadata
The central requirement in large-scale testing is to have a systematic
approach to test metadata, that is, information about individual test
cases and test executions.
Unique Test IDs
Each individual test case should have its own unique and permanent identification
number or ID.
- Such an ID then allows test results to be mapped back to information
about the test case through a test ID lookup.
- Unique IDs are necessary to enable all sorts of automated test
analysis and tracking.
- The permanence of unique IDs allows test results at different points
in time to be correlated.
- Test case serial numbers represent one simple approach to test IDs.
- Test case URIs using a combination of repository path and timestamp
may be easier to manage.
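The URI approach can be sketched in a few lines. The `test://` scheme, the path, and the timestamp format here are illustrative choices, not a standard:

```python
# Sketch: build a permanent test ID (URI) from a repository path plus a
# creation timestamp. Scheme, path, and format are illustrative only.
from datetime import datetime, timezone

def make_test_uri(repo_path, created):
    """Combine a repository path and a UTC timestamp into a stable URI."""
    return f"test://{repo_path}@{created.strftime('%Y%m%dT%H%M%SZ')}"

uri = make_test_uri("suites/login/test_logout.py",
                    datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc))
```

Because the path and timestamp never change after creation, the ID is permanent even if the test is later moved or renamed, which is exactly the property needed to correlate results over time.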
Test Attributes
Once a system of test IDs is in place, a database of
basic test attributes can be built and associated with each ID.
- Test case author and date.
- Test case requirement mapping (e.g. category/choice identification).
- Environment attributes, such as OS, libraries.
- Test case input files.
- Test case setup scripts.
- Test case expected results data.
- Link to relevant bug tracking IDs.
- Source code coverage data for the test.
- Keywords or tags.
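One way to realize this attribute database is a record type keyed by test ID. The field names below simply mirror the attribute list and are illustrative, not a fixed schema:

```python
# Sketch of a test-attribute record keyed by test ID; field names mirror
# the attribute list above and are illustrative, not a required schema.
from dataclasses import dataclass, field

@dataclass
class TestAttributes:
    test_id: str
    author: str
    created: str                                     # ISO date
    requirement: str                                 # e.g. category/choice
    environment: dict = field(default_factory=dict)  # OS, libraries
    input_files: list = field(default_factory=list)
    setup_scripts: list = field(default_factory=list)
    expected_results: str = ""
    bug_ids: list = field(default_factory=list)      # bug-tracker links
    coverage: set = field(default_factory=set)       # covered source lines
    tags: list = field(default_factory=list)

# The metadata database is then a simple mapping from ID to record.
db = {}
rec = TestAttributes(test_id="TC-001", author="alice",
                     created="2024-05-01",
                     requirement="login/valid-password")
db[rec.test_id] = rec
```

In practice this would live in a real database, but the essential point is the same: every attribute hangs off the unique, permanent test ID.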
Change-Centric Testing
With large-scale test automation over a long period of time,
there may be too many test cases to run on every daily software build.
Change-centric testing works from source-code changes
to select test cases that specifically cover the particular source lines that change.
This can greatly reduce the cost of test execution and shorten feedback cycles.
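The selection step reduces to a set intersection between each test's recorded coverage and the changed lines. A minimal sketch, with hypothetical coverage data:

```python
# Sketch of change-centric test selection: keep only the tests whose
# recorded coverage intersects the changed source lines.
# All coverage data and test IDs below are hypothetical.

def select_tests(coverage_map, changed_lines):
    """coverage_map: test ID -> set of (file, line) pairs it covers."""
    return sorted(tid for tid, covered in coverage_map.items()
                  if covered & changed_lines)

coverage = {
    "TC-001": {("auth.py", 10), ("auth.py", 11)},
    "TC-002": {("cart.py", 40)},
    "TC-003": {("auth.py", 11), ("db.py", 7)},
}
changed = {("auth.py", 11)}          # lines touched by the latest commit
selected = select_tests(coverage, changed)
```

Here only TC-001 and TC-003 run, while TC-002, which never executes the changed code, is skipped. This is why per-test coverage data belongs in the test metadata database.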
Test Case Maintenance
In a test automation setting, each test case should be designed for potential long-term
use in system regression testing.
Test cases should hence be written for maintainability.
- Test cases should be well-written and well-documented.
- Avoid hard-coded data and paths in test cases.
- Prepare for test maintenance activities to be necessary when
environment conditions change.
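The hard-coded paths point deserves a concrete contrast. One common remedy is to resolve test data locations from configuration; the environment variable name `TEST_DATA_DIR` and the fallback directory here are assumptions for illustration:

```python
# Sketch: resolve test input files from a configurable root instead of
# hard-coding absolute paths. TEST_DATA_DIR is a hypothetical variable.
import os
from pathlib import Path

def data_file(name):
    """Resolve a test input file relative to a configurable data root.

    A hard-coded alternative like "/home/alice/tests/data/login.csv"
    breaks as soon as the test runs on another machine or account.
    """
    root = Path(os.environ.get("TEST_DATA_DIR", "testdata"))
    return root / name

path = data_file("login_cases.csv")
```

When the environment changes, only the configuration needs updating; the test cases themselves stay untouched, which is the heart of test maintainability.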
Continuous Integration Systems
Continuous integration systems perform significant test automation
through automated builds triggered by commits to a source code repository.
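At the core of such a system is a simple decision: rebuild and retest whenever the repository head has moved. A minimal sketch of that trigger logic, with illustrative commit IDs:

```python
# Minimal sketch of the commit-triggered build decision at the core of a
# continuous integration loop; commit IDs below are illustrative.

def should_build(last_built_commit, head_commit):
    """Trigger a build whenever the repository head differs from the
    last commit that was built and tested."""
    return head_commit is not None and head_commit != last_built_commit

# A CI poller would check the current head (e.g. via `git rev-parse HEAD`
# or a repository webhook) and run the automated build and test suite
# whenever this returns True, then record head_commit as the last built.
```

Everything else in a CI system, test selection, recording, reporting, hangs off this loop, which is why CI is a natural host for the automation activities listed earlier.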