How Do You Measure the Quality of Test Automation?


How do you evaluate a test automation framework? Jana Djordjevic, a Senior Test Consultant at Endava, shared several tips and tricks with us. She added that we do not need to follow them blindly, but she was certain that knowing them would have helped her greatly when she first started building and maintaining test automation frameworks. While we were discussing software testing, a question came up: is it better to invest time in improving an existing solution, or to build a new test framework from scratch?

The story of a test framework in bad shape

The team received a report on the state of an existing test automation framework. It was clear from the look on their faces that the news was not good. The framework was in bad shape, and a decision had to be made about its fate.

The spaghetti code was hard to read. The tests were flaky, passing at one moment and failing the next. Duplicated code appeared in many places. Existing methods did not clearly indicate what they were used for. Generic test names were a further cause for concern: when a test failed, one could not tell what the problem was, or where. The analysis of the results alone took too long. Thousands of lines of code, with only a few days left to decide.
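To illustrate the kind of test the report complained about, here is a minimal, hypothetical sketch; the `LoginPage` class and the scenario are invented for illustration, not taken from the actual framework. A generic name and several scenarios packed into one test mean a failure report says nothing about what broke or where:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Anti-pattern: a generic name and several scenarios packed into one test.
// When "test1" fails, the report says nothing about what broke or where.
class LoginTests {

    @Test
    void test1() {
        LoginPage page = new LoginPage();   // setup duplicated in every test
        page.open();
        page.login("user", "correct-password");
        assertTrue(page.isLoggedIn());                     // scenario 1: valid login
        page.logout();
        page.login("user", "wrong-password");
        assertFalse(page.isLoggedIn());                    // scenario 2: invalid login
        assertEquals("Wrong password", page.errorText());  // scenario 3: error message
    }
}
```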

Still, this was no big surprise. The application had a long history, and both its purpose and its users had changed over time. Many people came and went; in the end, several different people had worked on the same framework at different times, and the initial design became difficult to follow. Another reason for the mess was a change of technology stack.

The complexity of the code made the framework unsuitable for change. Any modification caused a domino effect and forced revisions of other methods or tests just to retain the same functionality. Maintenance became a nightmare. Managers driven by the wrong motives insisted on writing new tests within the existing framework, and kept postponing code refactoring to future sprints because other things were always a priority. The suboptimal tests were tolerated at first: although slow, they were accurate. However, the number of such tests increased over time, and the execution of regression tests required more and more time.

In addition, semi-automation came into play: files were uploaded manually, then the scripts were run, after which the results were recorded manually. This meant more manual testing and an extended release cycle. Maintaining the framework itself was everybody’s and nobody’s responsibility. Tests were failing, one by one, until the test framework met its technical end. The decision was clear – a new test framework would be built from scratch.

Test framework fit as a fiddle

The first phase was planning. Jana gathered teammates and colleagues who had worked on similar projects. Together they decided on the testing scope, got familiar with the application and its target users, and discussed who would use the test framework. They formed a team that had the necessary knowledge and skills, and everyone wanted to learn something new. Product owners, developers and DevOps engineers were involved in the implementation, and they considered how to adopt Continuous Integration (CI) as a development practice. The product owner and the project manager were made aware that maintaining the test framework had to be a priority, and responsibility for maintaining and monitoring it was assigned to a single team member. Business analysts gave the team a clear picture of the direction in which the application would develop, and pledged to plan ahead and notify the team of changes to the application in a timely manner. These were excellent foundations for building a healthy test framework from scratch.

As the test framework evolved, so did the test coverage. The tests initially covered all basic functionality, and from sprint to sprint automated tests covered each new feature. Since the execution time for automated tests was significantly shorter than the time required for manual testing, the EMTE (Equivalent Manual Test Effort, the manual testing time the automated suite replaces) kept improving.

Writing new tests required less and less intervention in the framework itself, and thus became faster. Methods were written to be reusable and called from as many tests as possible: more and more code lived in the framework, and less in the tests themselves. Test coverage grew quickly over time because less and less time was needed to add a new test; all one had to do was call a method and supply a data set (a sketch of this data-driven style appears below). Code reuse was at a high level.

The team followed a test naming convention (see the naming sketch below). As soon as a test failed, they would look for the root cause, and debugging became simpler because one knew exactly where the potential bug was. If it turned out to be a code error, they would immediately open a bug that was resolved soon after; if the result was a false positive (a false alarm), the test itself would be updated. Reliability was high: the share of false positives among all test results stayed small. There were no false negatives, because the tests were sequenced so that a single set of validation steps was followed throughout the test execution.

With the framework well organized, a new team member could find artefacts (methods, test reports, database connectors, jar files, etc.) in a very short time. Regression testing results were available quickly, and the team was aiming for regression runs that lasted only a few minutes. With a test framework like this, there was no way they could have a bug without knowing it. The code was easy to read and understand, even for those with little coding experience. By using backdoors to increase the testability of the application, they achieved independent tests, which enabled parallel test execution. Each test tested exactly one scenario. The data sets created for testing were well defined, and mock-ups and stubs were used to avoid depending on other parts of the system (a stubbing sketch follows).

Good practices, such as reviewing each other’s pull requests and learning from them, gave the team new ideas for improving the framework. Code quality and coding standards were kept high with the help of static code analysis tools. All metrics indicated a high-quality test framework, and the progress was well documented. Introducing a new member into the test automation team was itself a test of the framework’s quality, and since the introduction went smoothly, the verdict was clear – the framework was fit as a fiddle. Lastly, Jana shared a few tips for testing the tests: both testers and developers agreed on introducing unit tests into the test framework to guarantee its stability (a small example closes this section). The team proudly looked at the reports waiting for them after the lunch break; the framework did a great deal of work on its own, saving team members both time and focus.
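The “call a method and supply a data set” style described above might look like the following parameterized JUnit 5 test. `DiscountCalculator` and its tiers are assumptions invented for this sketch; the pattern is that a new test case becomes a new data row rather than new code:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Data-driven testing: the logic lives in the framework,
// and a new test case is just a new data row.
class DiscountTests {

    @ParameterizedTest(name = "order of {0} gets a {1}% discount")
    @CsvSource({
        "50,   0",   // below the threshold: no discount
        "100,  5",   // first tier
        "500, 10"    // second tier
    })
    void orderTotal_getsExpectedDiscount(int orderTotal, int expectedPercent) {
        // DiscountCalculator is a hypothetical production class.
        assertEquals(expectedPercent, DiscountCalculator.percentFor(orderTotal));
    }
}
```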
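As a concrete illustration of the naming convention and the one-scenario-per-test rule, here is a minimal JUnit 5 sketch. The `Account` class and `InsufficientFundsException` are hypothetical; the point is that a failing test’s name alone tells you what broke:

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// One scenario per test, with a name that states the expected behaviour,
// so a failure points straight at the problem.
class WithdrawalTests {

    @Test
    @DisplayName("Withdrawal within the balance reduces the balance")
    void withdrawalWithinBalance_reducesBalance() {
        Account account = new Account(100);   // Account is hypothetical
        account.withdraw(40);
        assertEquals(60, account.balance());
    }

    @Test
    @DisplayName("Withdrawal above the balance is rejected")
    void withdrawalAboveBalance_isRejected() {
        Account account = new Account(100);
        assertThrows(InsufficientFundsException.class, () -> account.withdraw(150));
    }
}
```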
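For test independence and parallel execution, dependencies on other parts of the system were replaced with mock-ups and stubs. A minimal sketch of the idea using Mockito, with a hypothetical `ExchangeRateClient` and `PricingService`, could look like this:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

// Stubbing an external dependency keeps the test independent of other
// systems, which in turn makes parallel execution safe.
class PricingServiceTests {

    @Test
    void totalInUsd_usesCurrentExchangeRate() {
        // ExchangeRateClient and PricingService are hypothetical classes.
        ExchangeRateClient rates = mock(ExchangeRateClient.class);
        when(rates.rateFor("EUR")).thenReturn(1.10);   // stub: no network call

        PricingService service = new PricingService(rates);
        assertEquals(110.0, service.totalInUsd(100.0, "EUR"), 0.001);
    }
}
```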
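Finally, “testing the tests”: the unit tests the team introduced into the framework itself might target shared helpers, as in this sketch (the `TestDataBuilder` helper is hypothetical):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertNotEquals;

// Testing the tests: plain unit tests for the framework's own helpers,
// so a bug in a shared utility cannot silently skew every test built on it.
class TestDataBuilderTests {

    @Test
    void uniqueEmail_generatesDistinctAddresses() {
        TestDataBuilder builder = new TestDataBuilder();   // hypothetical helper
        assertNotEquals(builder.uniqueEmail(), builder.uniqueEmail());
    }
}
```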

Meetup – Endava, Belgrade

There was a need and desire to share these experiences with the testing community, and the meetup took place on February 20, 2020. Jana shared her tips and tricks for measuring the quality of test automation in front of a large number of colleagues; the organizers had to set up additional rows of chairs to comfortably accommodate all the attendees.


Jana put a Slido Q&A board up on the projector screen and invited the attendees to engage. They gladly accepted the invitation and started posting their questions, and questions and opinions kept popping up throughout the two-hour presentation. The attendees gained a great deal of new insight and information. Even after the meetup, some were waiting for Jana in front of the conference room to ask a few more questions and share their opinions on the topic.

You can watch the entire meetup here.

Tags:  #automation_of_testing   #qa   #quality_assurance   #quality_of_test_automation   #software_testing   #test_automation   #test_framework   #testing


Slavica Mastilović Sulica

Software Tester @ Endava

Slavica joined Endava through an internship for a QA position. For the last few years she worked as a teacher of mathematics and computer science in public and private schools, including a school for gifted children. Slavica attended several Oracle Academy certified courses, including SQL, PL/SQL and Java programming. When she is not testing software, she likes tasting different types of food and discovering new flavours from cuisines around the world. Public events are particularly appealing to her, preferably in good company.
