Tuesday, September 4, 2018

TestNG Transformations - A real life case study

In one of the projects I am involved with, common functionality applies to various communication channels such as SMS, email, AdWords and web banners. Naturally, in a situation like this the question that comes to mind is: "Do we repeat the test cases for every communication channel?" The answer I always give is: "It depends on the qualification method used."

For qualification methods such as manual or exploratory testing, it is a matter of time and manpower to execute the test cases as many times as there are communication channels, or to use pairwise testing to optimize the effort while minimizing the risk of missing defects. For a qualification method such as automated testing, in order to achieve the highest coverage we execute the tests once per communication channel.
Automating the test cases can be done in two different ways. The first is to maintain four different test scripts that differ only in the channel; this is bad practice because of the maintainability issues it creates. The second is to use a predefined TestNG mechanism to execute a single test script x times.

My initial approach using predefined TestNG annotations was to use data providers. The problem with this approach was that I needed to set up the communication channel in the @BeforeClass section, while data providers can only feed the @Test methods of the test script.
My second approach was to use the Factory functionality. The problem here is that TestNG does not respect the dependency between test steps within a single thread: @BeforeClass is executed as many times as specified in the factory, then Test Step 1 as many times, and so on and so forth, all within a single web session.
One fitting solution, outside the scope of the predefined TestNG annotations, is to create a base class implementing the test. The base test holds the aforementioned communication channel as a parameter. The parameter is declared with a TestNG @Parameters annotation on the @BeforeClass method and set before execution in the parameters section of testng.xml:
@BeforeClass(alwaysRun = true)
@Parameters({"test.channel"})

with the corresponding entry in testng.xml:

<parameter name="test.channel" value="Email" />

For maintainability and traceability purposes I have created four different empty test scripts (traceability), each extending the base test class (maintainability).
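For context, a minimal testng.xml carrying the channel parameter could look like the sketch below (the suite, test and class names are illustrative assumptions, not from the project; the element names follow the TestNG documentation):

```xml
<suite name="ChannelSuite">
  <!-- read by the @Parameters({"test.channel"}) setup method in the base class -->
  <parameter name="test.channel" value="Email" />
  <test name="EmailChannel">
    <classes>
      <class name="tests.EmailChannelTest" />
    </classes>
  </test>
</suite>
```

One such file (or one `<test>` block) per channel keeps the empty channel-specific subclasses trivially small.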
This scenario is ideal if the functionality of the software is identical for every communication channel, but what do we do if a test step differs slightly? One easy solution is to skip the test step; in TestNG this is done with SkipException. Keep in mind that in the reporting the skipped test steps are shown as skipped, which can be confusing in cases where there are failures in the @BeforeClass section. A cleaner approach is to set the enabled=false flag of the @Test annotation, but we need to change its value dynamically in the context of a test suite. This can be accomplished by using the ITestAnnotation transformer provided by TestNG, like:
public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
    if (campaignTestChannel.equals("Email")) {
        annotation.setEnabled(true);
    }
}
The above piece of code implements the rule that if the channel is Email, all test steps carrying the enabled flag should be executed.
In conclusion, given the aforementioned situation in which multiple executions of a script are needed, automation is the way to go. To achieve maintainability, avoid replicating test scripts and choose the method (data providers, factories or extending a base class) that fits your needs. If you need to exclude test steps from your base class and don't care about reporting, go for the SkipException method; otherwise, transform your annotations dynamically at runtime.

Tuesday, April 15, 2014

Concurrency testing made easy

When a multi-tenant application is under test, concurrency testing is the next step of the testing cycle. Testing with concurrent users takes a lot of time and usually ends up in defects that are not fully reproducible because of timing (synchronization) issues, thus making automation of these tests a must.

Automating concurrent tests using Selenium / WebDriver always brings up one of the most common questions in automation testing: do I need a new window, or should I open a new tab? The correct question is: how do I open a new session? And here is the tricky part of these tests: if you open a new window or tab, you carry over the session from the window you started from. Some sites suggest "clearing the cookies", but this affects the whole browser, not a specific window, so the newly assigned session id is the same in all existing windows and tabs.

The solution to the above problem is to start two or more Selenium / WebDriver sessions. To simplify this procedure, we use specific Stevia functionality to instantiate two WebDriver controllers. The instantiation is declared in a Spring XML file we call test-controllers and includes the following:

<bean id="webDriverControllerSession1" class="controllers.ControllerSession1" scope="prototype"/>
<bean id="webdriver-s1-driver" class="com.persado.oss.quality.stevia.selenium.core.controllers.registry.Driver" ... />
<bean id="webDriverControllerSession2" class="controllers.ControllerSession2" scope="prototype"/>
<bean id="webdriver-s2-driver" class="com.persado.oss.quality.stevia.selenium.core.controllers.registry.Driver" ... />

Note: the Spring framework "p" schema should be included in the XML file, and the controller factory implementation should be created in the appropriate packages.

Having the two controllers at hand, we can use the Stevia annotation @RunsWithController(controller = controllers.ControllerSession1.class) before any @BeforeClass or @Test annotation.

The next big problem at hand is how to synchronize the sessions in order to control the test execution. Synchronization in Java is quite an advanced topic and there are a lot of techniques, to mention a few: CyclicBarrier and CountDownLatch. These techniques involve heavy coding, and using them here would be overkill. The most appropriate way to synchronize the execution of tests across different controllers is to use the dependency attributes provided by TestNG.
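For a sense of what the hand-rolled alternative involves, here is a minimal CountDownLatch sketch (class and method names are illustrative) in which one session must wait for another; TestNG's dependency attributes express the same ordering declaratively, without any of this plumbing:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: session 2 must not add a contact before session 1
// has finished creating the phonebook.
public class LatchSync {
    static final CountDownLatch phonebookCreated = new CountDownLatch(1);
    static volatile boolean contactAdded = false;

    static void session1CreatePhonebook() {
        // ... WebDriver actions for session 1 would go here ...
        phonebookCreated.countDown(); // signal: precondition done
    }

    static void session2AddContact() throws InterruptedException {
        // block until session 1 signals, with a safety timeout
        if (phonebookCreated.await(5, TimeUnit.SECONDS)) {
            // ... WebDriver actions for session 2 would go here ...
            contactAdded = true;
        }
    }
}
```

Multiply this by every ordered step in a scenario and the "heavy coding" point becomes clear.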

Let's use a specific example to put things into perspective:

Let's assume there is a phonebook application where you can create a custom phonebook and assign contacts to it. You can edit or delete the phonebook only as long as it is empty. Now suppose that in session 1 a user creates a phonebook and in session 2 a second user assigns a contact to it. The user in session 1 is not aware that an entry has been added to the phonebook and wants to edit the phonebook name. As soon as the user presses the edit button, he / she is presented with an error message stating that editing a non-empty phonebook is not allowed.
In the aforementioned scenario there are two clear preconditions, as follows:
User in session 1 creates phonebook
User in session 2 adds a contact in phonebook
And a test step with the following action and expected results
User 1 presses the edit button => User is presented with error message

The synchronization sequence is shown below:

For the two preconditions, the synchronization can be done using TestNG @BeforeClass annotations as follows:

@BeforeClass(alwaysRun = true)
public void createPreconditionsSession1(String parallelTestsFlag) {
    // User in session 1 creates phonebook
}

@BeforeClass(dependsOnMethods = "createPreconditionsSession1")
public void createPreconditionsSession2() throws Exception {
    // User in session 2 adds a contact in phonebook
}

@Test(alwaysRun = true, description = "Edit phonebook name")
public void testStep_1() {
    // User presses the edit button => User is presented with error message
}

TestNG will execute createPreconditionsSession1 -> createPreconditionsSession2 -> testStep_1.

The next big question that comes to mind with this implementation is: how do I run preconditions on different sessions for each step? Lacking a before-specific-test method, and given that using a @Test annotation adds noise (the step is counted in the results as passed even though it is a precondition), we need an extra annotation to execute the preconditions. The latest SNAPSHOT of Stevia solves this problem by introducing an annotation that allows the execution of a series of pre- and post-conditions on a @Test-annotated step. Let's use the previous example and add a step to our test verifying the story: "User in session 1 creates a new phonebook. User in session 2 adds a contact. User in session 1 presses the delete button for the created phonebook."

@Preconditions(controller = controllers.ControllerSession2.class, value = {"test3Pre1", "test3Pre2"})
@RunsWithController(controllers.ControllerSession1.class)
@Test(alwaysRun = true, description = "", dependsOnMethods = {"testStep_1"})
public void testStep_2() {
    // User presses the delete button => User is presented with error message
}

public void test3Pre1() {
    // User in session 2 adds a contact on the created phonebook
}

Note: the second phonebook (for the delete-action test step) is created in the preconditions section along with the first phonebook (for the edit-action test step).

So, having these powerful annotations in Stevia, you can orchestrate any test script execution without sacrificing any testing standards. As a result, quality is lifted to a new level by having test-step pre- and post-conditions independent of your testing framework (JUnit or TestNG).


Friday, April 4, 2014

Stevia enhancements and documentation


Lately, we've been using Stevia a lot; we're also extending it as much as possible (no promises, but PhantomJS is in our crosshairs lately). By talking to the community, it has come to our attention that people are having difficulty starting up a Stevia project. To that end, we've created some content to give you a boost:

  1. Our SNAPSHOT javadoc is regularly published on GitHub Pages,
  2. We've set up Travis continuous integration for our builds,
  3. We have a brand-new, all-shining Stevia Quick Start Guide.

We hope that the above additions will give you a helping hand. If not, feel free to send us your bugs/comments via the issue tracker on GitHub. We're listening.

Wednesday, February 19, 2014

Page Objects are not enough

The Page Object design pattern is one of the most popular patterns in test automation, simply because it promotes test maintenance and unifies the way QA teams work. As the pattern dictates, a page object is the representation of an HTML page as an object. The object contains the HTML element locators (XPath, CSS or DOM) and the user actions on them as methods (press, input, select, etc.).

Although the Page Object pattern is widely accepted, our work experience showed that it is inadequate for fully describing an HTML page, because it lacks the ability to represent the business logic of the page. To solve this problem we use an entity complementary to the page object: the business object.

The business object is a class containing the business logic behind the page. An example of HTML page business logic is the validation of the mandatory fields of a form. The page object, by definition, cannot contain this logic; it can only report that pressing the submit button raises an error, without the reasoning. The business object complements the page object by providing a method that tries to submit the form without the mandatory fields completed and verifies that an error is raised. This complementary nature holds because the business object uses the page object's methods to assemble the logic; its methods are thus compositions of page object methods.

The breakthrough of this approach is that it keeps the logic clearly separated from the HTML elements, introducing an abstraction layer for the business object and thus insulating future changes in the page logic from the locators. In addition, it may introduce reusability (a business object may exist in more than one page, and hence span more than one page object). Using the previous example, the removal of a mandatory field does not affect the page object, only the business object, so the change is made in a single place.
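As an illustration of this split, here is a minimal, self-contained sketch (class and method names are invented for the example, and the page object is stubbed rather than driving a real browser):

```java
// The page object knows only locators and raw user actions:
class RegistrationPage {
    private boolean errorDisplayed = false;

    public void typeUsername(String name) {
        // selenium.type("css=#username", name) in the real page object
    }

    public void pressSubmit() {
        // selenium.click("css=#submit") in the real page object; the stub
        // simulates the app raising a validation error on an empty form
        errorDisplayed = true;
    }

    public boolean isErrorDisplayed() {
        return errorDisplayed;
    }
}

// The business object composes page-object methods into page logic:
class RegistrationBusinessObject {
    private final RegistrationPage page;

    public RegistrationBusinessObject(RegistrationPage page) {
        this.page = page;
    }

    /** Submit the form with mandatory fields empty; report whether validation fired. */
    public boolean submitWithoutMandatoryFieldsRaisesError() {
        page.pressSubmit();
        return page.isErrorDisplayed();
    }
}
```

If the mandatory-fields rule changes, only RegistrationBusinessObject moves; the locators in RegistrationPage stay untouched.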

In a later post I will try to expand on how we implement our business objects and what we do in special cases, such as repeating pages within the software under test.


Tuesday, February 11, 2014

Gray box testing is the way to adapt for automation testing

When we finished the design of the objects that set up the preconditions of an automated test case, we showed it to one of our developers, and his first reaction was: "it is the same as the app's domain model". This comment made me realize that knowing the application internals would be extremely useful in developing correct test scripts, and that gray box testing would be more appropriate for our automation tests than black box, so a switch in that direction is worth considering.

Why is gray box testing more appropriate for automation than for manual testing? Because any automation tester has the technical skills to understand the internals of a software application and to write testing code that interfaces with the available hooks or services, something a manual (non-programming-savvy) tester would not have.

As a case study I used the project I was working on at the time. In this project there was a front end used as an emulator of an external system, and the front-end app retrieved its data from a database. Black box testing ignores where the front-end app gets its data, as long as the displayed data is correct. While this is acceptable for any functional test case, the big picture reveals shady areas of operation. For example, large amounts of data require pagination (more effort and man-hours from the developers); in parallel testing there are significant delays due to concurrency issues (the front end was not initially designed for heavy traffic); and in some cases there was data loss, misleading the testing team into opening false defects.

Switching to gray box testing to identify the internals (understanding how data is written to the database and how it is queried by the front end) was simple with the use of some Spring connectors to read the database. The advantages were the following:

  • No more maintenance for the front-end emulator
  • Parallel testing speed-up
  • Single point of failure (the database, not the emulator)
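The pattern behind this switch can be sketched in a few lines of plain Java (names are illustrative; in the real project the data-layer lookup would wrap the Spring database connectors rather than the in-memory stub used here):

```java
import java.util.Map;
import java.util.function.Supplier;

// Gray-box check: instead of scraping the front-end emulator, the test
// reads the same record straight from the data layer and compares.
public class GrayBoxCheck {
    // In the real project this Supplier would wrap a database query;
    // it is pluggable here so the pattern can be shown without a live DB.
    private final Supplier<Map<String, String>> dataLayer;

    public GrayBoxCheck(Supplier<Map<String, String>> dataLayer) {
        this.dataLayer = dataLayer;
    }

    /** True when the displayed row matches the persisted row field-by-field. */
    public boolean uiMatchesDatabase(Map<String, String> uiRow) {
        return uiRow.equals(dataLayer.get());
    }
}
```

With the check anchored on the database, a stale or lossy emulator no longer produces false defects.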

Switching to gray box testing made our scripts more robust and lifted the maintenance effort from the development team, therefore becoming more suitable for our needs. Expanding the gray box technique allowed us to start using services like JMX to communicate with the app in a more accurate and direct way, leading to the testing of various paths inside the app's code that were not exposed to us initially. This helped us reveal not functional bugs but bottleneck-related ones.

Gray box testing is the way to go for any automation tester, because it can reveal potential bottlenecks inside the code before performance testing does, and because it simplifies the way test scripts are created. Gray box testing pushes you to learn more tools to hook into the app, expanding your software development skills and making you a better automation engineer.


Tuesday, November 5, 2013

During the automated regression cycles there are some defects that are carried over through the iterations,
so these tests appear as failed in ReportNG. Because new tests may introduce new defects that appear as failed in the report as well, it is sometimes difficult and time-consuming to distinguish between the old and the new defects. In this post we will describe a technique to display the tests with the old, known defects in a separate column in ReportNG.

The first step is to somehow 'mark' the test steps in which the known defects appear. There are a lot of ways to do this using features of the TestNG framework. One way is to use the expectedExceptions attribute of the @Test annotation.

expectedExceptions holds the list of exceptions that a test method is expected to throw. If no exception, or one different from those on the list, is thrown, the test is marked as a failure. Assuming that the test step with the known defect fails at an assertion point, the code should look like this:

@Test(expectedExceptions = AssertionError.class)
public void testStepX() {
    // Assertion point that fails
}

After this code is executed, the test step with the known defect is considered passed.

The second step is to collect all these tests through a listener or a setup class.
If you have a superclass that all your test classes extend, you can include the following code in it. (This technique uses the native dependency injection of the TestNG framework: http://testng.org/doc/documentation-main.html#native-dependency-injection)

@AfterMethod(alwaysRun = true)
protected void ignoreResultUponExpectedException(ITestResult result) {
    if (result.isSuccess() && result.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
        result.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
        result.getTestContext().getSkippedTests().addResult(result, result.getMethod());
    }
}
An alternative way is to do this through a TestNG Listener:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class MyListener implements IInvokedMethodListener {

    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (testResult.isSuccess() && testResult.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
            testResult.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
            testResult.getTestContext().getSkippedTests().addResult(testResult, testResult.getMethod());
        }
    }
}
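For MyListener to take effect it must be registered with TestNG; one common way (element names per the TestNG documentation; the suite name here is illustrative) is in testng.xml:

```xml
<suite name="RegressionSuite">
  <listeners>
    <listener class-name="MyListener" />
  </listeners>
  <!-- tests ... -->
</suite>
```

Alternatively, the listener can be passed on the command line or via the ServiceLoader mechanism, as described in the TestNG documentation.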

At this point the tests with the known defects appear as skipped (yellow in ReportNG) with a specific exception message.

One could stop here, since the tests with the known defects are now in the skipped column. Nevertheless, this may also be confusing, because there are cases where we have additional skipped tests; this happens when a TestNG configuration method fails (e.g. when @BeforeClass fails).

The point is that if we want to further separate the tests with the known defects, we must make changes to the native ReportNG code and rebuild ReportNG.
The code is available at: https://github.com/dwdyer/ReportNG/downloads
Import the ReportNG project into your IDE (Eclipse, NetBeans or IntelliJ IDEA). It is a good idea to convert it to a Maven project in order to build the jar we will use more easily.

The basic ReportNG class responsible for gathering and post-processing the TestNG results is HTMLReporter.java. The changes in this class go in the createResults method, in order to put the tests with the known defects under a separate key in the Velocity context. ReportNG uses the Apache Velocity template framework to generate the results; for more info about Apache Velocity you can visit: http://velocity.apache.org/

public static final String KNOWN_DEFECTS_TESTS_KEY = "knownDefects";

private void createResults(List<ISuite> suites, File outputDirectory, boolean onlyShowFailures) throws Exception {
    int index = 1;
    for (ISuite suite : suites) {
        int index2 = 1;
        for (ISuiteResult result : suite.getResults().values()) {
            boolean failuresExist = result.getTestContext().getFailedTests().size() > 0
                    || result.getTestContext().getFailedConfigurations().size() > 0;
            if (!onlyShowFailures || failuresExist) {
                IResultMap skippedTests = result.getTestContext().getSkippedTests();
                IResultMap knownDefects = new ResultMap();
                for (ITestResult tr : skippedTests.getAllResults()) {
                    if (tr.getMethod().getMethod().getDeclaredAnnotations()[0].toString().contains("expectedExceptions=[class")) {
                        knownDefects.addResult(tr, tr.getMethod());
                    }
                }

                VelocityContext context = createContext();
                context.put(RESULT_KEY, result);
                context.put(FAILED_CONFIG_KEY, sortByTestClass(result.getTestContext().getFailedConfigurations()));
                context.put(SKIPPED_CONFIG_KEY, sortByTestClass(result.getTestContext().getSkippedConfigurations()));
                context.put(FAILED_TESTS_KEY, sortByTestClass(result.getTestContext().getFailedTests()));
                context.put(KNOWN_DEFECTS_TESTS_KEY, sortByTestClass(knownDefects));
                context.put(SKIPPED_TESTS_KEY, sortByTestClass(skippedTests));
                context.put(PASSED_TESTS_KEY, sortByTestClass(result.getTestContext().getPassedTests()));
                String fileName = String.format("suite%d_test%d_%s", index, index2, RESULTS_FILE);
                generateFile(new File(outputDirectory, fileName), RESULTS_FILE + TEMPLATE_EXTENSION, context);
            }
            index2++;
        }
        index++;
    }
}
The final changes should be made in the overview.html.vm, reportng.properties and reportng.css files:

In reportng.properties add the property:
knownDefects=Known Defects
In reportng.css add the style for the known-defect tests (I have set the color to pink):
.knownDefects            {background-color: #ff3399;}
.test .knownDefects      {background-color: #ff99cc;}
Finally, in overview.html.vm we make the most changes, in order to display the tests with the known defects in a separate column. If you make these changes, build a ReportNG jar (through mvn install) and put that jar on your project's classpath when generating the results, you will get a separate column with the known defects. Note that once a defect is fixed, the test will fail with a message like: expected exception was ... but ...
This is an indication that you must remove the expectedExceptions attribute from that test step. The relevant excerpt of the template is shown below.

#foreach ($suite in $suites)
<table class="overviewTable">
  #set ($suiteId = $velocityCount)
  #set ($totalTests = 0)
  #set ($totalPassed = 0)
  #set ($totalSkipped = 0)
  #set ($totalFailed = 0)
  #set ($totalKnownDefects = 0)
  #set ($totalFailedConfigurations = 0)
    <th colspan="8" class="header suite">
      <div class="suiteLinks">
        #if (!$suite.invokedMethods.empty)
        ##<a href="suite${suiteId}_chronology.html">$messages.getString("chronology")</a>
        #if ($utils.hasGroups($suite))
        <a href="suite${suiteId}_groups.html">$messages.getString("groups")</a>
  <tr class="columnHeadings">

  #foreach ($result in $suite.results)
  #set ($notPassedTests = $result.testContext.skippedTests.size() + $result.testContext.failedTests.size())
  #set ($total = $result.testContext.passedTests.size() + $notPassedTests)
  #set ($totalTests = $totalTests + $total)
  #set ($totalPassed = $totalPassed + $result.testContext.passedTests.size())
  #set ($totalKnownDefects = $totalKnownDefects + $utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalSkipped = $totalSkipped + $result.testContext.skippedTests.size() -$utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalFailed = $totalFailed + $result.testContext.failedTests.size())
  #set ($totalFailedConfigurations = $totalFailedConfigurations + $result.testContext.failedConfigurations.size())
  #set ($failuresExist = $result.testContext.failedTests.size()>0 || $result.testContext.failedConfigurations.size()>0)

  #if (($onlyReportFailures && $failuresExist) || (!$onlyReportFailures))
  <tr class="test">
   <td class="test">
    <a href="suite${suiteId}_test${velocityCount}_results.html">${result.testContext.name}</a>

    <td class="duration">
    #if ($result.testContext.passedTests.size() > 0)
    <td class="passed number">$result.testContext.passedTests.size()</td>
    <td class="zero number">0</td>

    #if ($result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    #set ($skipped = $result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size())

    <td class="skipped number">$skipped</td>
    <td class="zero number">0</td>

    #if ($result.testContext.failedTests.size() > 0)
    <td class="failed number">$result.testContext.failedTests.size()</td>
    <td class="zero number">0</td>

    #if ($utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    <td class="knownDefects number">$utils.getKnownDefects($result.testContext.skippedTests).size()</td>
    <td class="zero number">0</td>

    #if ($result.testContext.failedConfigurations.size() > 0)
    <td class="failed number">$result.testContext.failedConfigurations.size()</td>
    <td class="zero number">0</td>

    <td class="passRate">
      #if ($total > 0)
      #set ($passRate = (($total - $notPassedTests) * 100 / $total))


    <tr class="suite">
    <td colspan="2" class="totalLabel">$messages.getString("total")</td>

    #if ($totalPassed > 0)
    <td class="passed number">$totalPassed</td>
    <td class="zero number">0</td>
    #if ($totalSkipped > 0)
    <td class="skipped number">$totalSkipped</td>
    <td class="zero number">0</td>

    #if ($totalFailed > 0)
    <td class="failed number">$totalFailed</td>
    <td class="zero number">0</td>
    #if ($totalKnownDefects > 0)
    <td class="knownDefects number">$totalKnownDefects</td>
    <td class="zero number">0</td>
    #if ($totalFailedConfigurations > 0)
    <td class="failed number">$totalFailedConfigurations</td>
    <td class="zero number">0</td>

    <td class="passRate suite">
      #if ($totalTests > 0)
      #set ($passRate = (($totalTests - $totalSkipped - $totalFailed -$totalKnownDefects) * 100 / $totalTests))



Tuesday, September 17, 2013

Get Table Data

One of the most common tasks in our project is to retrieve data from a table in order to assert on it. In this post I will try to describe a unified way to get the required data from any table in a specific format, so that my assertions are well defined.
For the assertion points to be well defined, I usually prefer to have my actual data in the form of a Map(key, value).
I selected the key of my map to be the value of the first column, with the value being a second map that contains the column names as keys and the column values as values.
The table values were chosen as the key-value pairs instead of the table indexes for maintainability purposes: after a column addition, or on an unsorted table, an index would return the wrong result while a value would not. The resulting assertions then read straight off this map.
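To make the desired shape concrete, here is a hedged Java sketch of such a map and an assertion against it (the data and column names are purely illustrative; in practice the map would come from the table-scan method below):

```java
import java.util.Map;

public class TableAssertions {
    public static void main(String[] args) {
        // Outer key: first-column value; inner map: column header -> cell value.
        Map<String, Map<String, String>> tableInfo = Map.of(
                "John Doe", Map.of("Phone", "555-0100", "Email", "john@example.com"));

        // Keys are table values, not indexes, so the check survives
        // column additions and row reordering.
        if (!tableInfo.get("John Doe").get("Phone").equals("555-0100")) {
            throw new AssertionError("Phone column mismatch for John Doe");
        }
    }
}
```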
The first thing we need to do in order to construct our map is to get the number of rows and columns of the table as follows:
rows = selenium.getCssCount("css=table tbody tr").intValue()
columns = selenium.getCssCount("css=table tbody tr td").intValue()
With the number of rows and columns at hand, the next step is to retrieve the names of the columns from the table header, as follows in Groovy:
public List<String> getTableColumnNames() {
    def headerNames = []
    (1..selenium.getCssCount("css=table thead tr th").intValue()).each { column ->
        if (!selenium.getText("css=table thead tr th:nth-child(" + column + ")").isEmpty()) {
            headerNames << selenium.getText("css=table thead tr th:nth-child(" + column + ")")
        }
    }
    return headerNames
}
Having the column names, the next step is to construct the desired map, as follows in Groovy:
public HashMap<String, HashMap<String, String>> getTableInfo() {
    def tableMap = [:]
    def columnNames = getTableColumnNames()
    (1..selenium.getCssCount("css=table tbody tr").intValue()).each { row ->
        def columnMap = [:]
        (2..columnNames.size()).each { column ->
            columnMap.put(columnNames[column - 1], controller().getText(componentName + ":nth-child(" + row + ") *:nth-child(" + column + ")"))
        }
        tableMap.put(controller().getText(componentName + ":nth-child(" + row + ") td:nth-child(1)"), columnMap)
    }
    return tableMap
}
The above implementation can be found embedded in Stevia, enriched with code that detects your locator style (XPath, CSS or ID).

The above table-scan implementation could be altered to accept only td elements as columns, by changing the selector used when building the column map from the generic *:nth-child to td:nth-child.
Stevia includes similar methods such as: 
  • getTableInfoAsList 
  • getTableElements2DArray
  • getTableElementTextUnderHeader