How to run a large and diverse regression test in parallel on multiple servers
03-02, 10:30–11:00 (UTC), Gather Town

We were faced with the challenge of running our regression test in parallel and within a single night. To make the challenge harder: not all tests can run in parallel. We built a Python tool that runs 24/7 on multiple servers and accepts tasks: execute RFW test suites. A task can be run in parallel or not; the tool (ART_Queue) takes care of that.

The grid operators in the Netherlands were migrating to a system called C-ARM, an application that centralizes energy data and calculations for the allocation, reconciliation and measurement-data processes for all gas and electricity metering data in the Netherlands. The client wanted to cover a large number of test requirements with automated tests, which resulted in 1000+ Robot Framework regression test cases at the time. To run these tests in a short amount of time, they had to be run in parallel. The first solution was a very large number of Windows batch scripts, which worked but had a major flaw: it was maintainable only by its creator. When he left the project, it became clear that making changes took a lot of time, and that the whole set of batch jobs could be used on only one test environment, so a small change to the test set led to manual code changes in multiple batch files.

A new way of running the regression test set was much needed; we call this new way the ART run software. It is a Python program that reads an input file, adds some additional information, and uses robot commands to execute the parallel jobs and the non-parallel jobs separately. After all jobs are finished, the run software creates the RFW reports. The redesign resulted in fewer lines of code than there were batch files, which improved the maintainability of running the regression test.
For the first time it was easy to run our tests. And it worked like a charm.
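The real ART run software is the team's own; as a minimal sketch of the approach described above (hypothetical file names and job format, assuming Robot Framework's `robot` and `rebot` commands are on the path), it might look like:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def build_command(suite, output):
    # One robot invocation per suite; each run writes its own output.xml.
    return ["robot", "--output", output, suite]

def run_jobs(jobs, max_workers=4):
    """jobs: list of (suite_path, can_run_in_parallel) tuples."""
    outputs = []
    parallel = [suite for suite, par in jobs if par]
    serial = [suite for suite, par in jobs if not par]
    # Parallel-safe suites run concurrently, each in its own robot process.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        for i, suite in enumerate(parallel):
            out = f"out_par_{i}.xml"
            outputs.append(out)
            futures.append(pool.submit(subprocess.run, build_command(suite, out)))
        for future in futures:
            future.result()
    # Non-parallel suites run one after another.
    for i, suite in enumerate(serial):
        out = f"out_ser_{i}.xml"
        outputs.append(out)
        subprocess.run(build_command(suite, out))
    # rebot combines the partial results into one set of RFW reports.
    subprocess.run(["rebot", "--output", "combined.xml", *outputs])
    return outputs
```

Combining the per-suite `output.xml` files with `rebot` at the end is what produces a single log and report for the whole nightly run.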

Running a lot of test cases on a daily basis creates a lot of test evidence. For quite a while we exported the test results as CSV files, which were then merged into one Excel report. There were no complaints about the Excel file, but we wanted an easier way to store the test results, which made us try the RFW library TestArchiver. This library stores the RFW test results in a database of your choice. We linked the created database to a Grafana dashboard.
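In the project TestArchiver did the actual archiving, but the core idea — pushing each test's verdict into a database that a Grafana dashboard can query — can be illustrated with just the standard library (hypothetical table and column names):

```python
import sqlite3

def store_results(con, results):
    """Store (suite, test, status, elapsed_ms) rows so a dashboard can query them."""
    con.execute(
        "CREATE TABLE IF NOT EXISTS test_result ("
        "suite TEXT, test TEXT, status TEXT, elapsed_ms INTEGER, "
        "run_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    con.executemany(
        "INSERT INTO test_result (suite, test, status, elapsed_ms) "
        "VALUES (?, ?, ?, ?)",
        results,
    )
    con.commit()
```

TestArchiver goes further than this sketch: it parses the Robot Framework output itself and supports several database engines, so no manual export step is needed.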

With the software running on dedicated servers, the other testers on the project were interested in using it as well. Security did not grant them access to the servers, which led to the solution called ‘ART Queue’: replace the CSV with a central database. Where the CSV was a static list on a server, updated once a day, the queue is dynamic. Every tester can now add test jobs to the queue, which are then picked up by the ART run software, and the results of their tests can be retrieved from Grafana and a web portal.
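The queue idea can be sketched with a table in a shared database (schema and names are hypothetical, not the real ART Queue implementation): testers insert job rows, and the run software repeatedly claims the oldest queued job and executes it.

```python
import sqlite3

SCHEMA = """CREATE TABLE IF NOT EXISTS job_queue (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    suite TEXT NOT NULL,
    parallel INTEGER NOT NULL DEFAULT 0,
    status TEXT NOT NULL DEFAULT 'queued')"""

def add_job(con, suite, parallel=False):
    # Any tester with database access can enqueue a test job.
    con.execute("INSERT INTO job_queue (suite, parallel) VALUES (?, ?)",
                (suite, int(parallel)))
    con.commit()

def claim_next_job(con):
    # Mark the oldest queued job as running and return it; a real multi-server
    # setup would need a transaction or locking strategy to avoid double claims.
    row = con.execute(
        "SELECT id, suite, parallel FROM job_queue "
        "WHERE status = 'queued' ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    con.execute("UPDATE job_queue SET status = 'running' WHERE id = ?", (row[0],))
    con.commit()
    return row
```

Because the workers poll the database instead of reading a daily file, new jobs are picked up whenever they are added, which is what lets the servers run tests around the clock.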

The ART Queue software now runs 24/7 and is always able to run a test.

Working for CGI since 1996
Married, 3 children
Living in Assen (Netherlands)