Programming Paradigms in Test Automation


Regardless of the personal opinions of a few people, the simple fact is that the demand for software testers who can design and develop effective test automation is increasing. Perhaps one reason for the disdain by some folks in the industry is the limitations of the test automation approach they are most familiar with, and they sometimes assume those limitations apply to all types of test automation. However, not all test automation approaches are equal, and each approach has its own advantages and disadvantages.


At its core, an automated test case is software code, and just as product software is developed using various approaches, there are different programming paradigms used to develop test automation, such as:



  1. Record and playback automation

  2. Keyword or action-word driven automation

  3. Scripted automation

  4. Procedural automation

  5. Model based automation

Record and playback automation


The record and playback paradigm simply records sequences of keyboard and mouse events and auto-magically codifies them, usually into some proprietary scripting language, which can then be replayed (executed) over and over again. There are usually severe limitations to this type of automation, and it tends to be extremely fragile, requiring constant massaging (re-recording). Although many tools based on the record/playback paradigm allow ‘test developers’ to modify the scripted actions to some extent, and possibly even incorporate simple yes/no oracles, I think many people view the record/playback paradigm as being only slightly more useful than trained monkeys in limited situations.
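To illustrate why recorded scripts are so fragile, here is a minimal sketch of the kind of raw event data a record/playback tool captures and replays. The event format, coordinates, and `playback` helper are invented for illustration; real tools emit proprietary script languages.

```ruby
# Hypothetical recorded session: hard-coded screen coordinates and
# keystrokes captured verbatim at record time.
RECORDED_EVENTS = [
  [:click, 412, 318],          # coordinates of the address bar *at record time*
  [:keys,  "news.google.com"],
  [:click, 412, 350],          # coordinates of the "Go" button *at record time*
]

# Playback re-issues each event in order. If the UI layout shifts even
# slightly, the hard-coded coordinates miss their target, which is why
# such scripts constantly need re-recording.
def playback(events)
  events.map do |type, *args|
    case type
    when :click then "click at (#{args[0]},#{args[1]})"
    when :keys  then "type #{args[0]}"
    end
  end
end
```

Because the script encodes *where* the user clicked rather than *what* they clicked, any cosmetic change to the application invalidates it.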


Keyword or action-word driven automation


Keywords or action words are simple scripts, usually in some tabular format, that ‘describe’ a sequence of ‘actions’ for the computer to perform. Of course, the key to keywords is the underlying architecture of the tool that interprets the keywords and executes the sequence of events. The beauty of keyword-driven testing is that it hides the actual code and, similar to record and playback, can be more easily used by business analysts or ‘user domain experts’ hired into testing roles to automate something. I do see the benefit of keyword-driven testing in some limited contexts (especially for companies that rely on business analysts/user domain experts for testing software), but let’s be real…these people aren’t automating anything…they are simply filling out a form that is then fed into a tool that performs the actions as prescribed by the listed instructions. The keyword form does nothing by itself, and the only thing a ‘tester’ has to think about is using the correct keywords to sequentially get from point A to point Z for a ‘test.’
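The separation between the keyword table and the engine that interprets it can be sketched as follows. The keyword names, the `Browser` stub, and the interpreter are all invented for illustration; a real keyword-driven tool would map actions onto an actual automation library.

```ruby
# Stub standing in for a real browser-automation library; it just
# logs the actions it is asked to perform.
class Browser
  attr_reader :log
  def initialize;     @log = [];                      end
  def goto(url);      @log << "goto #{url}";          end
  def type(field, v); @log << "type #{field}=#{v}";   end
  def click(button);  @log << "click #{button}";      end
end

# The tabular "test" a business analyst might author: a row per action.
KEYWORD_SCRIPT = [
  ["goto",  "http://news.google.com"],
  ["type",  "q", "weather"],
  ["click", "Search"],
]

# The automation engine: interprets each row and performs the action.
# All of the actual programming lives here, not in the table above.
def run_keywords(browser, rows)
  rows.each do |keyword, *args|
    case keyword
    when "goto"  then browser.goto(args[0])
    when "type"  then browser.type(args[0], args[1])
    when "click" then browser.click(args[0])
    else raise "Unknown keyword: #{keyword}"
    end
  end
  browser.log
end
```

The table on its own does nothing; all the real work is in the interpreter, which is the point made above.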


Scripted automation (imperative programming)


The primary difference between keyword and scripted automation is that the tester actually develops the test in a programming language rather than filling in a form with abstracted keywords that drive some automation engine. However, similar to keywords, scripted automation tends to use rudimentary statements of basic instructions that manipulate the software to perform a pre-determined sequence of events, as illustrated below.



def test_b_googlenews

  #--------------------------------------------------------------
  # Test to demonstrate WATIR select from drop-down box functionality
  #

  # variables
  test_site = 'http://news.google.com'

  puts '## Beginning of test: google news use drop-down box'
  puts '  '

  puts 'Step 1: go to the google news site: news.google.com'
  $browser.goto(test_site)
  puts '  Action: entered ' + test_site + ' in the address bar.'

  puts 'Step 2: Select Canada from the Top Stories drop-down list'
  $browser.select_list(:index, 1).select("Canada English")
  puts '  Action: selected Canada from the drop-down list.'

  puts 'Step 3: click the "Go" button'
  $browser.button(:caption, "Go").click
  puts '  Action: clicked the Go button.'

  puts 'Expected Result: '
  puts ' - The Google News Canada site should be displayed'

  puts 'Actual Result: Check that "Canada" appears on the page by using an assertion'
  assert($browser.text.include?("Canada"))

  puts '  '
  puts '## End of test: google news selection'

end # end of test_b_googlenews


def test_c_googleradio

Most examples of scripted automation appear as codified versions of a set of steps listed in a less-than-adequately designed manual test case, using hard-coded arguments for variables, mindless progression between steps, and simple deterministic oracles. Scripted automation is probably most beneficial for automating specific sub-tasks in “computer-assisted testing.” However, scripted automation is usually too prescriptive and relies heavily on nothing going wrong during the execution of the test case.


Procedural automation (procedural programming)


In procedural automation the tester also develops a test by writing a series of computational steps to achieve a desired purpose. However, unlike scripted automation, the procedural automation paradigm generally provides better control-flow options during the execution of the automated test case, allows for greater complexity in the design, improves reuse and reduces maintenance through modularity, and can employ both deterministic and heuristic oracles.



// Procedural programming example

static void Main(string[] args)
{
  string logResult = string.Empty;

  // Path to the data file passed as a string argument to the test case
  string pictTestData = args[0];

  // Stopwatch to measure test case duration
  Stopwatch sw = new Stopwatch();
  sw.Start();

  // Launch the AUT
  AutomationElement desktop = AutomationElement.RootElement;
  AutomationElement myAutForm = null;
  Process myProc = new Process();
  myProc.StartInfo.FileName = myConstantAutFileName;
  if (myProc.Start())
  {
    // Polling loop to find AUT window by window property
    int pollCount = 0;
    do
    {
      myAutForm = desktop.FindFirst(TreeScope.Children,
        new PropertyCondition(AutomationElement.AutomationIdProperty,
        myConstantAUTPropertyID));
      pollCount++;
      System.Threading.Thread.Sleep(100);
    }
    while (myAutForm == null && pollCount < 50);

    if (myAutForm == null)
    {
      throw new Exception("Failed to find dialog");
    }

    // Get UI element collection here…

    // Call method to read in test data
    string[] testData = ReadTabDelimitedFile(pictTestData);

    // iterate through each set of test data (data-driven test example)
    foreach (string test in testData)
    {
      // Call method to execute each set of test data and assign the return
      // value to the logResult variable; Oracle is separate method called
      // from the test method
      LogResultMethod(ExecuteCombinatorialTestMethod(test));
    }

    // close AUT and clean-up
    TimeSpan ts1 = sw.Elapsed;
    // log test case duration…
  }
  else
  {
    // Deal with situation if AUT failed to launch
  }
}

Procedural automation can be used for anything from API to GUI automated test cases designed to evaluate functionality (computational logic), non-functional areas such as stress, performance, and security, and also behavioral testing. Using a language similar to the product’s programming language removes abstraction layers and also enables other members of the team (developers) to easily review test cases.


Model based automation


Model based automation is a relatively new automation paradigm, and its complexity is beyond the scope of this single post. Basically, model based automation involves codifying abstracted machine states and state traversals, and couples these parts with an automation engine that uses some form of graph traversal logic to drive the system under test between the various states identified in the model. In some sense model based automation is similar to exploratory testing because tests are generally not pre-determined or pre-scripted, what constitutes a single test is really hard to describe, and the oracles generally detect errant behavior (or being in an unexpected state). Personally, I think there is tremendous potential in model based automation, but the industry has just begun to scratch the surface of this automation paradigm and it is still largely misunderstood. This automation paradigm requires more complex skill sets of the person designing the test automation, such as the ability to abstract important machine states as a model and encode system behaviors. For more information about model based automation I recommend taking a look at http://research.microsoft.com/en-us/projects/specexplorer.
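The core idea of encoding states and transitions, and letting an engine traverse them, can be sketched in a few lines. The two-state login model, the action names, and the random-walk strategy below are all invented for illustration; a real tool such as Spec Explorer generates traversals from a much richer formal model and checks the actual application against it.

```ruby
require 'set'

# Hypothetical model: state => { action => expected next state }
MODEL = {
  :logged_out => { :login  => :logged_in },
  :logged_in  => { :logout => :logged_out, :view_profile => :logged_in },
}

# Random-walk traversal: keep picking enabled actions until every
# transition in the model has been exercised at least once. Note there
# is no pre-scripted "test case" — the walk *is* the test.
def traverse(model, start, rng = Random.new(42))
  state   = start
  covered = Set.new
  path    = []
  total   = model.values.sum { |t| t.size }
  until covered.size == total
    action, next_state = model[state].to_a.sample(random: rng)
    path << [state, action, next_state]
    covered << [state, action]
    # Oracle: the model predicts the state the system should now be in;
    # a real harness would compare this prediction against the actual
    # application and flag any mismatch as errant behavior.
    state = next_state
  end
  path
end
```

The oracle here is the model itself: any divergence between the predicted state and the observed state is a failure, regardless of which path the walk happened to take.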


So, which approach is best?


In my opinion, there may be some limited value in record/playback, keyword, and scripted automation in specific contexts; however, a robust automated test case that will run on multiple environments, in multiple languages, and distributed across multiple platforms without rewriting the test for each variation requires well-designed tests developed using a procedural or model based automation approach.

Comments (9)

  1. strazzerj says:

    "Although many record/playback tools allow ‘test developers’ to modify the scripted actions to some extent, and possibly even incorporate simple yes/no oracles I think many people view record/playback tools as being slightly more useful than trained monkeys in limited situations."

    There are very, very few tools that could be accurately categorized as only "record/playback TOOLS".

    Virtually all commercial (and some open-source) test automation tools include a "record/playback FEATURE".

    Despite your dismissal, like all features they have their uses and abuses.

    Many people view QAers in general as "slightly more useful than trained monkeys".  I think it’s unfortunate to see someone like you use such derogatory terms.

  2. I.M.Testy says:

    Hi Joe,

    I agree there are a plethora of tools that include record/playback mechanisms. I should not have used the word ‘tools’ since I am speaking about the record/playback paradigm as an automation approach. (I have edited the post to remove the word tool and replace it with paradigm to clarify my thoughts.)

    I think you misread my statement. I did not infer that testers are slightly more useful than trained monkeys; my statement inferred (mine and) the opinion of people I have spoken with who have used a simple record/playback approach to test automation and view this automation paradigm as slightly more useful than trained monkeys. I think most testers quickly realize the limitations of the record/playback automation paradigm, and also understand the specific situations where it can add value to a project.

    I also think that most professional testers realize that greater success with the record/playback automation paradigm requires the tester to modify the underlying code at least to some extent. But, I would say that modifying the underlying code base obtained from a record/playback tool actually moves us closer towards the scripted automation paradigm.

  3. verand says:

    "So, approach which is best?"

    There are many factors that should be considered before choosing the right approach. Here are a few:

       * Analyze the application/product (Web, OS-Based, Technology…etc)

       * Realize what is to be tested and what is not.

       * Go through the requirements

       * Separate the areas as per the modules.

       * Analyze your customer/product needs and thus estimate the development activities. This gives you an idea of the number of build releases and testing cycles required.

       * Maintenance (Long-term/Short-term)

       * Budget

    It is not always recommended to have greater complexity in the design (I respect your thoughts too). But we cannot really benefit from designing and developing a next-generation automation framework to test a tiny application 🙂

  4. Over at his I. M. Testy blog, BJ Rollison offers succinct definitions of five approaches to automated testing: record and playback automation, keyword or action-word driven automation, scripted automation, procedural automation, and model based automation.

  5. plainplow says:

    Could you point us to resources for learning how to use procedural automation and/or the model based automation approach?

  6. I.M.Testy says:

    Hi PlainPlow,

    I would recommend "Structured Programming" by Dahl, Dijkstra, and Hoare for learning about procedural or structured programming paradigms. Wikipedia also has good pointers to additional info if you search on structured programming and procedural programming. (IMHO, they are essentially synonymous.)

    For model based testing, I would refer to the Spec Explorer website at http://research.microsoft.com/en-us/projects/specexplorer/. Also a search on Spec Explorer will provide additional learning resources.

  7. Rajeshkz says:

    I’ve been working on adopting an "object oriented approach" towards test automation. My guess is that this approach falls somewhere between a procedural paradigm and a model based automation paradigm. We’ve achieved a remarkable level of productivity with this approach. I’ve blogged about the approach here.

    http://elusivebug.blogspot.com/

    Regards

    Rajesh

  8. I.M.Testy says:

    Hi Rajesh,

    The development approach used to develop a test case may very well be fundamentally different than the development paradigm used to build a testing framework.

    In your post you state, “Since the automation tester will only use the business classes and methods that the automation framework exposes, development of these automation suite is very fast.” I could be wrong, but this sounds very much like key-word driven automation.

    Personally, limiting testers to a set of predefined methods in a script to drive a test framework that acts as an abstraction layer simply limits the ability of an automated test. As Dustin et al. stated, automation is a development activity. Well designed automation provides some degree of determinism and reduces the mindless scripting of rudimentary sequential actions.

    Also, it seems from your description that your concept of inheritance is primarily based on the ability to reuse code. I have lots of methods that I reused in procedural programming; reuse in and of itself does not imply inheritance in the OOP paradigm.

    A simple test for inheritance in an OOP paradigm is the “IS A” test; in other words, a sub-component IS A specialized version of its predecessor. For example, an automobile IS A specialized version of vehicle. A 2-door coupe IS A specialized version of automobile.

    And with regards to polymorphism, I am not sure that I would ever want to call code that can issue the same command to a superclass or interface and get different results. In application programming this is important, but in an automated test I have not yet been convinced of its applicable use. (Assuming we agree that polymorphism is not the same as randomization.)
