Tuesday, November 5, 2013


During automated regression cycles, some defects are carried over through the iterations, so the corresponding tests appear as failed in ReportNG. Since new tests may also introduce new defects that appear as failed in the report, it is sometimes difficult and time consuming to distinguish the old defects from the new ones. In this post we will describe a technique to display the tests with the old (known) defects in a separate column in ReportNG.

The first step is to somehow 'mark' the test steps in which the known defects appear. There are several ways to do this using features of the TestNG framework. One way is to use the expectedExceptions attribute of the @Test annotation.

The expectedExceptions attribute lists the exceptions that a test method is expected to throw. If no exception, or an exception not on this list, is thrown, the test is marked as a failure. Assuming that the test step with the known defect fails at an assertion point, the code should look like this:

@Test(expectedExceptions = AssertionError.class)
public void testStepX() {
    // Code
    // Assertion point that fails
}
So after this code is executed, the test step with the known defect is considered passed.

The second step is to collect all these tests through a listener or a setup class.
If you have a superclass that all your test classes extend, you can include the following code in it. (This technique uses the native dependency injection of the TestNG framework: http://testng.org/doc/documentation-main.html#native-dependency-injection)

@AfterMethod(alwaysRun = true)
protected void ignoreResultUponExpectedException(ITestResult result) {
    if (result.isSuccess()
            && result.getMethod().getMethod().getDeclaredAnnotations()[0]
                     .toString().contains("expectedExceptions=[class")) {
        // Move the passed test into the skipped results, flagged as a known defect.
        result.getTestContext().getPassedTests().removeResult(result.getMethod());
        result.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
        result.getTestContext().getSkippedTests().addResult(result, result.getMethod());
    }
}
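A side note: checking getDeclaredAnnotations()[0] assumes that @Test is the first annotation declared on the method. A slightly more defensive variant (a sketch, not from the original post) reads the expectedExceptions attribute directly from the @Test annotation:

// Sketch: detect expectedExceptions via the @Test annotation itself,
// instead of relying on the order of the declared annotations.
org.testng.annotations.Test testAnnotation =
        result.getMethod().getMethod().getAnnotation(org.testng.annotations.Test.class);
if (result.isSuccess()
        && testAnnotation != null
        && testAnnotation.expectedExceptions().length > 0) {
    // ... same handling as above ...
}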
An alternative way is to do this through a TestNG Listener:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class MyListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (method.isTestMethod()
                && testResult.isSuccess()
                && testResult.getMethod().getMethod().getDeclaredAnnotations()[0]
                             .toString().contains("expectedExceptions=[class")) {
            // Move the passed test into the skipped results, flagged as a known defect.
            testResult.getTestContext().getPassedTests().removeResult(testResult.getMethod());
            testResult.setThrowable(new Throwable("MARKED AS TEST WITH KNOWN DEFECT!!"));
            testResult.getTestContext().getSkippedTests().addResult(testResult, testResult.getMethod());
        }
    }
}
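For the listener to take effect it must be registered with TestNG, either through the <listeners> element of testng.xml or through the @Listeners annotation. A minimal sketch using the annotation (the test class name is hypothetical):

import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// Hypothetical test class, shown only to illustrate how MyListener is registered.
@Listeners(MyListener.class)
public class SomeTestClass {

    @Test(expectedExceptions = AssertionError.class)
    public void testStepX() {
        // Assertion point that fails due to the known defect
    }
}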

At this point the tests with the known defects appear as skipped (yellow in ReportNG) with a specific exception message.

One could stop here, since the tests with the known defects are now in the skipped column. Nevertheless, this may still be confusing, because in some cases there are additional skipped tests: this happens when a TestNG configuration method fails (e.g. when @BeforeClass fails).

The point here is that if we want to further separate the tests with the known defects, we must change the native ReportNG code and rebuild ReportNG.
The code is available at: https://github.com/dwdyer/ReportNG/downloads
So we must import the ReportNG project into our IDE (Eclipse, NetBeans or IntelliJ IDEA). It is a good idea to convert it to a Maven project, in order to build the jar we will use more easily.

The main ReportNG class responsible for gathering and post-processing the TestNG results is HTMLReporter.java. The change in this class is in the createResults method, so that the tests with the known defects are put under a separate key in the Velocity context. ReportNG uses the Apache Velocity template engine to generate the results; for more information about Apache Velocity see: http://velocity.apache.org/

public static final String KNOWN_DEFECTS_TESTS_KEY = "knownDefects";

@SuppressWarnings("deprecation")
private void createResults(List<ISuite> suites, File outputDirectory, boolean onlyShowFailures) throws Exception
{
    int index = 1;
    for (ISuite suite : suites) {
        int index2 = 1;
        for (ISuiteResult result : suite.getResults().values()) {
            boolean failuresExist = result.getTestContext().getFailedTests().size() > 0
                    || result.getTestContext().getFailedConfigurations().size() > 0;
            if (!onlyShowFailures || failuresExist) {
                // Move the tests marked with expectedExceptions from the skipped
                // results into a dedicated "known defects" result map.
                IResultMap skippedTests = result.getTestContext().getSkippedTests();
                IResultMap knownDefects = new ResultMap();
                for (ITestResult tr : skippedTests.getAllResults()) {
                    if (tr.getMethod().getMethod().getDeclaredAnnotations()[0]
                            .toString().contains("expectedExceptions=[class")) {
                        skippedTests.removeResult(tr.getMethod());
                        knownDefects.addResult(tr, tr.getMethod());
                    }
                }
                VelocityContext context = createContext();
                context.put(RESULT_KEY, result);
                context.put(FAILED_CONFIG_KEY, sortByTestClass(result.getTestContext().getFailedConfigurations()));
                context.put(SKIPPED_CONFIG_KEY, sortByTestClass(result.getTestContext().getSkippedConfigurations()));
                context.put(FAILED_TESTS_KEY, sortByTestClass(result.getTestContext().getFailedTests()));
                context.put(KNOWN_DEFECTS_TESTS_KEY, sortByTestClass(knownDefects));
                context.put(SKIPPED_TESTS_KEY, sortByTestClass(skippedTests));
                context.put(PASSED_TESTS_KEY, sortByTestClass(result.getTestContext().getPassedTests()));
                String fileName = String.format("suite%d_test%d_%s", index, index2, RESULTS_FILE);
                generateFile(new File(outputDirectory, fileName), RESULTS_FILE + TEMPLATE_EXTENSION, context);
            }
            ++index2;
        }
        ++index;
    }
}
The final changes should be made in the overview.html.vm, reportng.properties and reportng.css files:

In reportng.properties add the property:
knownDefects=Known Defects
In reportng.css add the style for the known-defect tests (I have set the color to pink):
.knownDefects            {background-color: #ff3399;}
.test .knownDefects      {background-color: #ff99cc;}
Finally, most of the changes are made in overview.html.vm, in order to display the tests with the known defects in a separate column. If you apply these changes, build a ReportNG jar (through mvn install) and put this jar on your project's classpath when generating the results, you will get a separate column with the known defects. Note that when the defect is fixed, the test will fail with a message like: expected exception was ... but...
This is an indication that you should remove the expectedExceptions attribute from this test step. Below you can see the changed template and a preview of how the report should look.

#foreach ($suite in $suites)
<table class="overviewTable">
  #set ($suiteId = $velocityCount)
  #set ($totalTests = 0)
  #set ($totalPassed = 0)
  #set ($totalSkipped = 0)
  #set ($totalFailed = 0)
  #set ($totalKnownDefects = 0)
  #set ($totalFailedConfigurations = 0)
  <tr>
    <th colspan="8" class="header suite">
      <div class="suiteLinks">
        #if (!$suite.invokedMethods.empty)
        ##<a href="suite${suiteId}_chronology.html">$messages.getString("chronology")</a>
        #end
        #if ($utils.hasGroups($suite))
        <a href="suite${suiteId}_groups.html">$messages.getString("groups")</a>
        #end       
      </div>
      ${suite.name}
    </th>
  </tr>
  <tr class="columnHeadings">
    <td>&nbsp;</td>
    <th>$messages.getString("duration")</th>
    <th>$messages.getString("passed")</th>
    <th>$messages.getString("skipped")</th>
    <th>$messages.getString("failed")</th>
    <th>$messages.getString("knownDefects")</th>
    <th>$messages.getString("failedConfiguration")</th>
    <th>$messages.getString("passRate")</th>
  </tr>

  #foreach ($result in $suite.results)
  #set ($notPassedTests = $result.testContext.skippedTests.size() + $result.testContext.failedTests.size())
  #set ($total = $result.testContext.passedTests.size() + $notPassedTests)
  #set ($totalTests = $totalTests + $total)
  #set ($totalPassed = $totalPassed + $result.testContext.passedTests.size())
  #set ($totalKnownDefects = $totalKnownDefects + $utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalSkipped = $totalSkipped + $result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size())
  #set ($totalFailed = $totalFailed + $result.testContext.failedTests.size())
  #set ($totalFailedConfigurations = $totalFailedConfigurations + $result.testContext.failedConfigurations.size())
  #set ($failuresExist = $result.testContext.failedTests.size()>0 || $result.testContext.failedConfigurations.size()>0)

  #if (($onlyReportFailures && $failuresExist) || (!$onlyReportFailures))
  <tr class="test">
   <td class="test">
    <a href="suite${suiteId}_test${velocityCount}_results.html">${result.testContext.name}</a>

    </td>
    <td class="duration">
      $utils.formatDuration($utils.getDuration($result.testContext))s
    </td>
    #if ($result.testContext.passedTests.size() > 0)
    <td class="passed number">$result.testContext.passedTests.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    #set ($skipped = $result.testContext.skippedTests.size() - $utils.getKnownDefects($result.testContext.skippedTests).size())

    <td class="skipped number">$skipped</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.failedTests.size() > 0)
    <td class="failed number">$result.testContext.failedTests.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($utils.getKnownDefects($result.testContext.skippedTests).size() > 0)
    <td class="knownDefects number">$utils.getKnownDefects($result.testContext.skippedTests).size()</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($result.testContext.failedConfigurations.size() > 0)
    <td class="failed number">$result.testContext.failedConfigurations.size()</td>
    #else
    <td class="zero number">0</td>
    #end

    <td class="passRate">
      #if ($total > 0)
      #set ($passRate = (($total - $notPassedTests) * 100 / $total))
      $passRate%
      #else
      $messages.getString("notApplicable")
      #end
    </td>

  </tr>
  #end
  #end

    <tr class="suite">
    <td colspan="2" class="totalLabel">$messages.getString("total")</td>

    #if ($totalPassed > 0)
    <td class="passed number">$totalPassed</td>
    #else
    <td class="zero number">0</td>
    #end
    #if ($totalSkipped > 0)
    <td class="skipped number">$totalSkipped</td>
    #else
    <td class="zero number">0</td>
    #end

    #if ($totalFailed > 0)
    <td class="failed number">$totalFailed</td>
    #else
    <td class="zero number">0</td>
    #end
  
    #if ($totalKnownDefects > 0)
    <td class="knownDefects number">$totalKnownDefects</td>
    #else
    <td class="zero number">0</td>
    #end 
    #if ($totalFailedConfigurations > 0)
    <td class="failed number">$totalFailedConfigurations</td>
    #else
    <td class="zero number">0</td>
    #end

    <td class="passRate suite">
      #if ($totalTests > 0)
      #set ($passRate = (($totalTests - $totalSkipped - $totalFailed - $totalKnownDefects) * 100 / $totalTests))
      $passRate%
      #else
      $messages.getString("notApplicable")
      #end
    </td>

  </tr>
</table>
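One detail worth noting: the overview template above calls $utils.getKnownDefects(...), which is not part of stock ReportNG, so a corresponding helper must also be added to the ReportNGUtils class (the object the templates see as $utils). A minimal sketch, assuming the same annotation check used in the listener (imports needed: java.util.ArrayList, java.util.List, org.testng.IResultMap, org.testng.ITestResult):

// Sketch of a helper added to ReportNGUtils; not part of stock ReportNG.
// Filters a result map down to the results whose first declared annotation
// carries an expectedExceptions attribute.
public List<ITestResult> getKnownDefects(IResultMap results) {
    List<ITestResult> knownDefects = new ArrayList<ITestResult>();
    for (ITestResult result : results.getAllResults()) {
        if (result.getMethod().getMethod().getDeclaredAnnotations()[0]
                .toString().contains("expectedExceptions=[class")) {
            knownDefects.add(result);
        }
    }
    return knownDefects;
}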




Tuesday, September 17, 2013

Get Table Data

One of the most common tasks in our project is to retrieve data from a table in order to assert on it. In this post I will try to describe a unified way to get the required data from any table in a specific format, so that my assertions are well defined.
For the assertion points to be well defined, I usually prefer to have my actual data in the form of a Map(key, value), so my assertions take the form:
Assert.assertEquals(data.get(key), expected_value)
I chose the key of the map to be the value of the first column of each row, and the value to be a second map containing the column names as keys and the cell values as values:
Map(column_1_value: Map(column_2_name: column_2_value, …, column_n_name: column_n_value))
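For example (the values are hypothetical), a table with the columns Name, Status and Owner and a row with the values jobA, RUNNING and admin would produce:
Map(jobA: Map(Status: RUNNING, Owner: admin))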
The table values, rather than the table indexes, were chosen as keys for maintainability: if a column is added, or the table is not sorted as expected, an index-based lookup will return the wrong cell, while a value-based lookup will not.
The resulting assertions look like:
Assert.assertEquals(data.get(row_1_value).get(column_2_name),expected_value)
Assert.assertEquals(data.get(row_1_value).get(column_3_name),expected_value)
The first thing we need to do in order to construct our map is to get the number of rows and columns of the table, as follows (the column count is taken from the first row, so that multi-row tables are counted correctly):
rows = selenium.getCssCount("css=table tbody tr").intValue()
columns = selenium.getCssCount("css=table tbody tr:nth-child(1) td").intValue()
With the number of rows and columns at hand, the next step is to retrieve the names of the columns from the table header, as follows (in Groovy):
public List<String> getTableColumnNames() {
    def headerNames = []
    (1..selenium.getCssCount("css=table thead tr th").intValue()).each { column ->
        // Keep only non-empty header cells
        if (!selenium.getText("css=table thead tr th:nth-child(" + column + ")").isEmpty()) {
            headerNames << selenium.getText("css=table thead tr th:nth-child(" + column + ")")
        }
    }
    return headerNames
}
Having the column names, the next step is to construct the desired map, as follows (in Groovy):
public HashMap<String, HashMap<String, String>> getTableInfo() {
    selenium.waitForElement(componentName)
    def tableMap = [:]
    def columnNames = getTableColumnNames()
    (1..selenium.getCssCount("css=table tbody tr").intValue()).each { row ->
        def columnMap = [:]
        // Iterate over the cells of the current row (not of the whole table),
        // skipping the first column, whose value is the key of the outer map.
        (2..selenium.getCssCount("css=table tbody tr:nth-child(" + row + ") td").intValue()).each { column ->
            columnMap.put(columnNames[column - 1],
                    controller().getText(componentName + ":nth-child(" + row + ") *:nth-child(" + column + ")"))
        }
        tableMap.put(controller().getText(componentName + ":nth-child(" + row + ") td:nth-child(1)"), columnMap)
    }
    return tableMap
}
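A usage sketch, combining getTableInfo with the assertion style described above (the row key "jobA" and the column values are hypothetical):

// Hypothetical table contents, for illustration only.
HashMap<String, HashMap<String, String>> data = getTableInfo();
Assert.assertEquals(data.get("jobA").get("Status"), "RUNNING");
Assert.assertEquals(data.get("jobA").get("Owner"), "admin");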
The above implementation can be found embedded in Stevia, enriched with code that detects your locator style (XPath, CSS or ID).

The above implementation of the table scan could be altered to accept only td elements as columns, by changing the column map selector to
td:nth-child("+column+")
instead of
*:nth-child("+column+")
Stevia includes similar methods such as: 
  • getTableInfoAsList 
  • getTableElements2DArray
  • getTableElementTextUnderHeader

Thursday, August 29, 2013

Regression Suites

In our agile project we utilized regression testing with test automation and continuous execution of our automated test scripts. This process served us well while the test execution lasted less than 4 hours, but it started to create major problems when the total number of test scripts increased and the execution time exceeded one day. A quick relief came with the introduction of parallel execution of test scripts, but in cases where management wanted a quick answer to "Do we deploy to production or not?" it still wasn't enough.

One way to design effective regression suites, to ensure the continuation of business functions, is to follow the rule:
Regression test suite != sum(Functional test cases)
The purpose of functional tests is to explore the behavior of specific business functions and highlight corner cases, while the purpose of regression tests is to give an overview of the entire system. Accordingly, the design of the regression suite should include test scripts that inspect the end-to-end business functions the system encapsulates.

For the purposes of our project (an agile environment), the regression suites were designed with the hypothesis that the GUI, boundary and GUI-negative tests should be separated from the runtime tests, because:
  1. Changes to the code at the boundaries are rare
  2. The GUI is tested indirectly (the application itself is used for the preconditions of each test case)
This separation naturally leads to designing different regression suites containing these special categories; one way to implement it is sketched below.
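One possible implementation of this separation (an illustration only; the original post does not prescribe a mechanism) is to tag test methods with TestNG groups, so that each suite file includes only the groups matching its category and execution frequency:

import org.testng.annotations.Test;

// Hypothetical test class: each method is tagged with the regression
// category it belongs to, so a testng.xml suite can include or exclude
// whole categories via <include name="..."/> under <groups>.
public class CustomerTests {

    @Test(groups = {"endToEnd"})
    public void createAndActivateCustomer() {
        // end-to-end business function, runs at the end of every sprint
    }

    @Test(groups = {"boundary"})
    public void customerNameAtMaximumLength() {
        // boundary check, scheduled e.g. once per release
    }

    @Test(groups = {"gui"})
    public void customerFormLayout() {
        // GUI check, executed less frequently
    }
}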

Moreover, a vertical separation according to execution frequency is needed. In an agile environment with two-week sprints and a four-sprint release cycle, effective regression execution points could be at the end of each sprint, every other sprint, and at the end of the release. The schedule of each regression suite could be decided according to how often the corresponding part of the code changes. For example, a boundary test is highly unlikely to change, so it could be scheduled to run only once per release, preferably one sprint before the release sprint (the last sprint is usually reserved for that sprint's new features).



As mentioned before, at the end of each sprint the new-features suite should be executed additively. For example, if in sprint 1 we have 5 features and in sprint 2 we have 3 more, the regression suite should have cases testing the 5 features at the end of sprint 1 and cases testing all 8 features at the end of sprint 2.