Test automation is a mantra of success in today's IT world and is often used as a strong qualifier to win new projects, yet it frequently adds little value for the client. Although automation promises dramatic savings in cost and effort, automation projects rarely succeed. In projects that follow RUP, where requirements are implemented in a phased manner, automation can be very fruitful, but only if RUP is understood and applied to the automation effort as well. The testing team should follow the Automated Testing Life-cycle Methodology (ATLM) alongside the Rational Unified Process. The key is to start automation in the Inception phase and continue it through Transition in order to realize ROI. This paper presents a process and guidelines for Managers and Automation Test Leads. Follow these steps as you staff, tool, and schedule your test automation project, and you will be well on your way to success.
I have seen a great many test automation problems. I have worked on automation projects big and small, and I have talked to people from many other projects. I present this paper to help you avoid the problems they faced. But first we need to understand the problem. Let me illustrate with a fable.
Once upon a time, there was a pilot test automation project for a very important client. The client was very quality conscious and provided the team with Rational testing tools. The development team decided to follow the Rational Unified Process. Mr. Rakesh, the Test Manager, hired experienced testers with good programming knowledge. Everyone was very excited: they would work with the best tools on the market, with no more Excel sheets for writing test cases, tracking defects, and generating reports.
The project started. Rational RequisitePro was used for requirements management, and the development team used Rational Rose for design and ClearQuest for change management. The testing team had no clue whatsoever what was going on. Then Rakesh realized this and instructed the team to start writing test cases in TestManager. The testers found TestManager a great tool and wrote some test cases based on the requirements. By then the development team had released the first build to test. The testing team executed the test cases manually and somehow got through.
The project moved to the next phase, Elaboration, where the team designed the data model and coded the critical modules. The testing team did not know what to do in this phase and waited for a build to arrive. They expanded the test cases in TestManager and left early for home.
The project moved to the Construction phase, and a build was delivered with most of the functionality implemented. The testing team found it difficult to manage its time efficiently. Defects were found, and the development team started fixing them. Rakesh then instructed all the testers to automate the application, and they did some quick record and playback. After spending a lot of effort, the testing team had a record-and-playback suite ready. When the next build came, the team failed to run even a single script properly. They concluded that slight maintenance was required, but they failed once again, and then failed again.
The client felt the organization was incapable of doing automation; Rakesh felt the team was inefficient; the team felt the testing tool was not up to the mark. Rakesh asked to be released from the project, and the team members concluded that automation is of no use and manual testing is better. The project was a failure.
That's my fable. Perhaps parts of the story sound familiar to you. But I hope you haven't seen a similar ending. This paper suggests some ways to avoid the same fate.
This fable illustrates several problems that plague test automation projects:
Automation at the last moment: There is a general tendency to think that automation should come at the end of the project. Managers don't focus on automation from the initial phases.
Automating only regression candidates: It is a myth that only regression candidates should be automated. Since regression testing comes late in the project, this belief pushes automation to the last moment.
Short of time: Testers often feel that the time given for testing is insufficient. This is because testing is done manually all along, and automation comes into the picture only at the end, very near the deadline.
Automation takes less time: Managers think automation takes little time, can be done in a couple of days, and will then save effort. In reality, automation takes more time than manual testing and pays off only in the long run.
Testing a RUP project is no different: Managers treat a RUP project like any other development model, such as Waterfall, where the product is delivered as a whole.
Any tool will do: Managers don't consider whether a tool suits the application under test; they go by popularity and only later realize that the tool is not a good fit.
Follow the Rules of Iterative Automation
This paper is organized around the normal steps we all use in our test automation projects, with special notes on the considerations and challenges particular to test automation:
1. Test Tool Acquisition
2. Test Framework Selection
3. Test Planning and Development
4. Test Execution
5. Analysis & Assessment
Step 1: Test Tool Acquisition
Role: Test Designer & Test Manager
Activity: Define Test Environment Configuration
Artifact: Test Environment Configuration
Test Tool Acquisition is an activity that should be done early, in the Inception phase. During Inception, very few requirements are implemented, just enough to verify feasibility. This is the phase in which automation tools should be evaluated and a proof of concept (POC) done. In the project described in this paper, tools such as QTP, WinRunner and Rational Robot were evaluated; Rational Robot came out the winner and was selected to automate the application under test.
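One simple way to make the Inception-phase tool evaluation objective is a weighted scoring matrix built from the POC results. The criteria, weights and scores below are invented for illustration only; the paper's project concluded in favor of Rational Robot.

```python
# Hypothetical weighted scoring matrix for test tool acquisition.
# Weights reflect how much each criterion matters for this project.
CRITERIA = {"object recognition on AUT": 3, "scripting language": 2,
            "integration with TestManager": 2, "cost": 1}

SCORES = {  # 1 (poor) .. 5 (excellent), per tool and criterion
    "Rational Robot": {"object recognition on AUT": 5, "scripting language": 4,
                       "integration with TestManager": 5, "cost": 3},
    "QTP":            {"object recognition on AUT": 4, "scripting language": 4,
                       "integration with TestManager": 2, "cost": 2},
    "WinRunner":      {"object recognition on AUT": 3, "scripting language": 3,
                       "integration with TestManager": 2, "cost": 2},
}

def weighted_score(tool):
    """Sum of (weight * score) over all evaluation criteria."""
    return sum(CRITERIA[c] * SCORES[tool][c] for c in CRITERIA)

best = max(SCORES, key=weighted_score)
```

The point is not the particular numbers but that the POC produces a recorded, comparable rationale for the tool decision instead of a choice made on popularity alone.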
Step 2: Test Framework Selection
Role: Test Designer
Activity: Select Automation Framework
Artifact: Test Automation Architecture
It is very important to select an appropriate automation framework. The options available include record/playback, a module-centric framework, a data-driven framework, and a keyword-driven framework. It is preferable to start analyzing the options in the Inception phase. The POC done in Inception helps the Test Designer select the framework that best suits the need. The number of iterations planned, the size of the project, and the nature of the requirements are also valuable inputs when selecting an automation framework.
The selected automation framework is frozen in the Elaboration phase; making changes after this point is risky and can involve a lot of rework. The feasibility of the framework is validated by automating the most critical requirements during Elaboration. The resulting Test Automation Architecture artifact is a composition of the various test automation design and implementation elements and their specifications, embodying the fundamental characteristics of the test automation software system.
This artifact is particularly useful when the automated execution of software tests must be maintained and extended through multiple test cycles. It is most useful as a single artifact per project.
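To make the keyword-driven option concrete, here is a minimal sketch: each test step is data (a keyword plus arguments), and a small driver maps keywords to action functions. In the actual project the actions would wrap Rational Robot/SQABasic calls; every name below is a hypothetical illustration.

```python
# Hypothetical action functions; in practice these would drive the AUT.
def do_login(user, password):
    return f"logged in as {user}"

def do_enter(field, value):
    return f"entered {value!r} into {field}"

def do_verify(field, expected):
    return f"verified {field} == {expected!r}"

# The keyword table is the heart of the framework: adding a capability
# means adding one function and one entry here, not editing every script.
KEYWORDS = {
    "login": do_login,
    "enter": do_enter,
    "verify": do_verify,
}

def run_test(steps):
    """Execute (keyword, arg, ...) tuples and return the step log."""
    return [KEYWORDS[keyword](*args) for keyword, *args in steps]

# A test case becomes a table that non-programmers can maintain:
log = run_test([
    ("login", "analyst1", "secret"),
    ("enter", "amount", "100"),
    ("verify", "balance", "100"),
])
```

This separation of test design (the step tables) from test implementation (the keyword functions) is what makes the framework resilient across the many iterations a RUP project produces.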
Step 3: Test Planning & Development
Role: Test Analyst
Activity: Identify Test Ideas, Identify targets of test
Artifact: Test Idea List, Test Cases, Test Data
The Test Analyst identifies the test cases to be automated and selects the automation candidates. The responsibilities for this activity include:
Identifying and defining each Test Case, and approving all subsequent changes to it.
Ensuring that changes are communicated to affected downstream roles.
Ensuring that sufficient Test Cases have been identified to provide satisfactory evaluation of the Target Test Items.
Ensuring that sufficient detail has been provided to implement and conduct the test.
Managing and maintaining appropriate traceability relationships.
Managing the appropriate scope of the Test Cases in a given iteration.
The Test Analyst is also responsible for Test Data:
Identifying potential data sources.
Gathering basic candidate Test Data.
Verifying the completeness, fitness for purpose and accuracy of the Test Data.
The test cases to be automated reside in Rational TestManager, and this activity is generally completed in the Elaboration phase. Once the test cases to be automated and the test data are ready, automation can proceed at full pace in the Construction phase.
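The relationship between test cases and test data can be sketched as a data-driven test: one parameterised procedure driven by rows of test data. In the project described here the data would live in a TestManager datapool; this stand-alone Python version, with an invented login example, is illustrative only.

```python
import csv, io

# Inline CSV stands in for an external test-data file or datapool.
TEST_DATA = """username,password,expected
alice,right-pw,success
alice,wrong-pw,failure
,right-pw,failure
"""

def attempt_login(username, password):
    """Hypothetical system under test: accepts only alice/right-pw."""
    if username == "alice" and password == "right-pw":
        return "success"
    return "failure"

def run_data_driven():
    """Run the login test once per data row; return pass/fail per row."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = attempt_login(row["username"], row["password"])
        results.append(actual == row["expected"])
    return results
```

Every row is a test case, so the Test Analyst can extend coverage by adding data without touching the script, which is exactly why test data is treated as a first-class artifact above.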
Activity: Implement Test, Implement Test Suite
Artifact: Test Script
This is the most important activity, and it is carried out mostly in the Construction phase; as earlier sections described, the POC is done in Inception and the critical test cases are automated in Elaboration. From this point on, the remaining automation candidates are automated, including the most important set: the regression test cases. The responsibilities for this activity include:
Identifying and defining each Test Script, and managing all subsequent changes.
Ensuring the Test Script accurately reflects the required test, identified by one or more Test Ideas or defined in one or more Test Cases.
Ensuring the Test Script is implemented according to defined standards so that it is compatible and maintainable with the other Test Scripts.
Ensuring the Test Script makes reasonably efficient use of the available resources.
Developing the Test Script with a focus on economy of effort, identifying opportunities for reuse and simplification.
Developing the Test Script so that it can be used as part of a Test Suite.
Step 4: Test Execution
Activity: Execute Test Suite, Analyze Test Failure
Artifact: Test log
Phase: Construction & Transition
Though some script execution happens even in the Inception and Elaboration phases, in the Construction phase all the automated scripts are executed as suites and the overall functionality of the application is validated.
Different suites can be executed. A BVT (build verification test) suite can be run for smoke testing, and its log used to decide whether to accept or reject the build. Regression suites can be run to ensure that fixes and changes to the application have no regression impact. The responsibilities during test execution include:
Ensuring the accurate recording of the observed outcome of each test executed in the test cycle.
Ensuring the Test Logs are uniquely and accurately identified, and stored against the correct test cycle or test run.
Actively monitoring for anomalous and erroneous occurrences in the Test Log, and taking appropriate recovery and reporting actions.
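The accept/reject decision described above can be sketched in a few lines: run the BVT suite and reject the build if any verification point fails. This is a hypothetical stand-in for TestManager suite execution and its logs.

```python
def run_bvt_suite(results):
    """results: list of (test_name, passed) tuples from the smoke run.
    Returns the build verdict and the names of the failing tests."""
    failures = [name for name, passed in results if not passed]
    verdict = "ACCEPT" if not failures else "REJECT"
    return verdict, failures

# Example: one failing smoke test is enough to reject the build.
verdict, failures = run_bvt_suite([
    ("login_loads", True),
    ("main_menu_opens", True),
    ("create_record", False),
])
```

Making the verdict mechanical is the value of the BVT suite: the build is accepted or rejected from the test log, not from a hurried manual impression.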
Step 5: Analysis & Assessment
Role: Test Analyst
Activity: Determine Test Results
Artifact: Test Results
Phase: Construction & Transition
This activity is carried out mainly in the Construction and Transition phases. The test logs are analyzed in Rational TestManager, and defects can be raised in ClearQuest.
If a defect turns out to be caused by a mistake or miscommunication, it is fixed and the script is run again in the next iteration. If the failure was caused by a desirable change, a new requirement, or a changed requirement, the script is modified and run again in the same iteration, and the test logs are analyzed once more.
Reviewing Test Logs and Change Requests.
Actively monitoring for anomalous and erroneous occurrences in the Test Log, investigating and reporting a conclusion.
Ensuring the accurate analysis of the observed outcome of each test conducted in the test execution cycle.
Ensuring the Test Results are uniquely and accurately identified and recorded against the correct test execution cycle.
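The failure-handling rule above amounts to a small triage decision per failed script: a script mistake is fixed and rerun in the next iteration, a requirement change means the script is updated and rerun in the same iteration, and anything else is a candidate application defect. The category names below are illustrative, not ClearQuest fields.

```python
def triage_failure(cause, current_iteration):
    """Return (action, iteration_in_which_to_rerun) for a failed script."""
    if cause in ("script_mistake", "miscommunication"):
        # Tester-side problem: fix the script, rerun next iteration.
        return "fix script and rerun", current_iteration + 1
    if cause in ("desired_change", "new_requirement", "changed_requirement"):
        # Application changed as intended: update the script now.
        return "modify script and rerun", current_iteration
    # Otherwise the application itself misbehaved: log a defect.
    return "raise defect in ClearQuest", current_iteration
```

Writing the rule down this explicitly keeps the team from the fable's fate, where every failure was read as "the tool doesn't work" instead of being classified and acted on.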
Role: Test Manager
Activity: Assess and Advocate Quality, Assess and Improve Test Effort
Artifact: Test Evaluation Summary
Phase: Construction & Transition
In the Construction and Transition phases, the Test Manager performs the following activities:
Reviewing the Test Results, change request statistics, and coverage statistics.
Reviewing important Change Request and Issue details.
Presenting an accurate and fair assessment of the software based on the defined Evaluation Mission.
Overall Iterative Automation Process
An iterative process facilitates reuse of project elements because it is easier to identify common parts as they are partially designed or implemented than to identify all commonality at the beginning.
The iterative automation process repeats the five steps described above in every iteration: the tool and framework decisions made in Inception and Elaboration feed test development, execution, and assessment in each subsequent iteration, and the automated suite grows as new requirements are implemented.
Iterative automation has the following advantages:
1. The automated test suite should be executed in every iteration. This is essential to "Continuously Verify Quality," one of the best practices of RUP.
2. Automating iteratively allows flaws to be detected and resolved earlier in the product life cycle. It saves a significant amount of time otherwise spent performing manual testing for each and every build in each and every release.
3. Iterative automation provides a better view of quality and helps in correcting major issues without jeopardizing target costs and schedules.
4. We cannot stop change from being introduced into our project. However, we must control how and when changes are introduced into project artifacts, and who introduces them. Unified Change Management (UCM) is Rational Software's approach to managing change in software system development, from requirement to release. "Manage Change" is another important RUP best practice.
5. Better overall quality: Automation scripts developed through an iterative process will be of better overall quality than those produced by a conventional sequential process. The scripts will have been tested several times, improving the quality of testing, and at the time of delivery they will have been running longer.
6. Stage containment: Iterative script development helps contain defects in the phase where they are introduced instead of letting them escape to the last phase. Design bugs can be identified in the Elaboration phase; developers need not wait until Construction to find design issues.
Automation should be iterative in nature and should be performed right from the beginning of the project. If the project follows RUP, where requirements are delivered across multiple phases and multiple iterations are expected, automation can greatly reduce testing effort and cost. The effort saved can be better utilized in other areas. This paper can serve as a reference for automation testing teams on what to do, and how, in the different phases of RUP.
Rational Unified Process
Kaner, Cem. 1997. “Improving the Maintainability of Automated Test Suites.” http://www.kaner.com/lawst1.htm
Fewster, Mark and Dorothy Graham. 1999. Software Test Automation, Addison-Wesley.