Ennov PV – An Auto-Testing Revolution

February 13, 2019

[Reading time 5 minutes]

Acceptance testing has always been an important part of our lives at Ennov PV. Understandably, our customers have high expectations with respect to quality in the products that we deliver. Pharmacovigilance software is synonymous with reliability and dependability and it has always been our goal to test our software as thoroughly as possible. Nonetheless, as with any development activity, there is always a chance of introducing errors; the complex nature of our products and the high degree of configurability mean that regression, or unexpected behaviour in other areas of the system, may occur as an unanticipated consequence of changes to the code.

To mitigate this risk, we have deployed a broad range of testing strategies. Every development story is tested a number of times:

  1. By the developer, against a set of pre-determined acceptance criteria (created with input and impact assessment by the collective team)
  2. Through a developer peer-review process called ‘code review’
  3. By an expert QA specialist, tasked with cross-checking the fulfilment of the acceptance criteria and encouraged to use their knowledge and experience of the product to ‘test-to-break’
  4. Through a formal test execution designed to verify acceptance of the changes
  5. Finally, through a manual regression test prior to release

With our ongoing web-development projects, we have introduced three significant new improvements to the process that further strengthen our testing and code quality strategy.

Unit testing

When developing individual stories, our engineers are tasked with the inclusion of suitable ‘unit tests’ in their code. These are so-called as they confirm the structure and functionality of individual small components (or ‘units’) of the application code. The testing is built into the fabric of the application itself, and is executed every time a build is produced – if a unit test fails, the application cannot be built successfully. As a result, the code is effectively testing itself at the level of its constituent parts.
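
To give a flavour of what this looks like in practice, here is a minimal sketch of a unit test, written in Java with JUnit purely for illustration — the CaseIdFormatter class and its behaviour are invented for the example, not taken from our code base:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CaseIdFormatterTest {

        // Hypothetical 'unit' under test: formats an internal case number
        // into the identifier displayed in the user interface.
        static class CaseIdFormatter {
            String format(int caseNumber) {
                return String.format("PV-%06d", caseNumber);
            }
        }

        @Test
        public void padsCaseNumberToSixDigits() {
            assertEquals("PV-000042", new CaseIdFormatter().format(42));
        }
    }

Build tools such as Maven run every test of this kind as part of the build and fail the build if any test fails, which is how the ‘code testing itself’ effect described above is achieved.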

A specific subset of these unit tests is used as general ‘smoke tests’ to confirm the overall integrity of the application at the point the code is built into a usable product.

Not only does unit testing promote a more considered and organised development approach (the engineer has to think through their design in a way that supports the testing), it also means that the code is routinely and consistently checked for integrity before release to the testing team. The net result is the delivery of better, more reliable code to the QA team.

Sonar code scanning

We have also implemented a static code-analysis technology called ‘Sonar’ (https://www.sonarsource.com/) to continually and automatically examine the code for compliance and consistency.

Each sprint we review the results of the Sonar scan and apply any recommended corrective action. We are taking advantage of tools like this to improve our own internal standards and promote quality wherever possible.
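
To illustrate the kind of corrective action involved — the AuditLogReader class below is invented for the example — static analysers like Sonar routinely flag issues such as resource leaks, and the recommended fix is usually small and mechanical:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class AuditLogReader {

        // A version of this method that created the reader without ever
        // closing it would typically be flagged by a Sonar scan as a
        // resource leak. The try-with-resources form used here guarantees
        // the reader is closed, even if readLine() throws.
        String firstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }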

Automated regression testing

Manual regression testing has proved invaluable in trapping unwanted side effects prior to release, but it is time-consuming to execute and occurs only at a late stage in the release cycle.

Since the beginning of the year, the QA team have had at their disposal a fully automated regression test suite for PV-Entry, built with Selenium. The suite is executed in full, every night, against the latest code built by Jenkins (the build management tool we use). A confirmation report is generated and circulated automatically, and any issues arising from the previous day’s coding are addressed immediately.

Each execution consists of more than 260 distinct tests (extrapolated from the 80+ tests in the original manual plan) and takes around 100 minutes to complete. In practice, this means we can continually evaluate all core functionality and catch potential regressions throughout the development cycle. The content of the automated plan will continue to evolve as new functionality is added to the application.
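
For readers unfamiliar with Selenium, each test drives a real browser through the application just as a human tester would. The sketch below, written in Java with JUnit, shows the general shape of such a test — the URL, element identifiers, and credentials are placeholders, not details of the real suite:

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginRegressionTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            // Assumes a chromedriver binary is available on the test machine.
            driver = new ChromeDriver();
        }

        @Test
        public void validUserReachesTheCaseList() {
            driver.get("https://pv-entry.example.com/login");   // placeholder URL
            driver.findElement(By.id("username")).sendKeys("qa.user");
            driver.findElement(By.id("password")).sendKeys("********");
            driver.findElement(By.id("loginButton")).click();
            assertTrue(driver.getTitle().contains("Case List"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }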

Furthermore, we have the opportunity to run duplicate copies of the test suite against different known configurations of the application. We anticipate that this will reduce the phenomenon of a feature added deliberately for one customer being viewed as a detrimental change by another customer with a different configuration or business use case.
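
One way of achieving this — sketched below with invented configuration names — is to parameterise the suite so that the same tests run once per known configuration:

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class ConfiguredRegressionTest {

        @Parameters(name = "configuration: {0}")
        public static Collection<Object[]> configurations() {
            // Invented names standing in for customer-specific setups.
            return Arrays.asList(new Object[][] {
                { "default" }, { "customer-a" }, { "customer-b" }
            });
        }

        private final String configuration;

        public ConfiguredRegressionTest(String configuration) {
            this.configuration = configuration;
        }

        @Test
        public void coreWorkflowBehavesTheSame() {
            // In a real suite this step would load the named configuration
            // and run the shared regression checks against it.
            Assert.assertNotNull(configuration);
        }
    }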

Benefits of automation

As a direct result of these changes to our testing strategy, we can have much more confidence in the reliability of the application code at any given build. This means that as we move forward with the new modules, we will be able to provide more regular, more manageable updates that we (and our customers) can be confident in. In time, this will help to reduce the long-term problems of lengthy issue backlogs and of customers falling significantly behind the current released version (owing to the pressure of extensive re-validation).

NB: In addition to the advancements described above, ALL development stories are still manually tested by the developer, by a development peer during code review, and by a member of the QA team, in accordance with our procedures. We are, however, able to consolidate the formal paper-based testing (step 4 above) and the manual regression testing (step 5 above) into this new automation. Additional manual testing will still be used for all high-risk stories (identified during backlog grooming) that carry additional regulatory risk.

All three new methods are being utilised across all of our ongoing web-development projects, including PV-Entry, PV-Reporter, and PV-Admin. Whilst quality has always been our primary objective, automation is becoming one of the cornerstones of our testing and quality strategy.

Thanks for your interest!

Nic
