Agile Compliance - Bridging the Testing Gap between Agile and Regulation in Medical Devices

In agile software development, testing and user validation are key to the successful delivery of beautifully working software. To minimize the risk of finding bugs late in development, testing is done as early as possible, preferably with automated tests that run after every change to the software.

Typically, testing is done and evaluated in two specific phases: Test and Review.

As changes introduced by new user stories risk breaking earlier work, the common goal is to automate as much of the testing as possible. Done well, this ensures that after each cycle the product is more complete and better than before: no new bugs are introduced that break things which worked previously.

The agile workflow also involves customers as early as possible to get feedback on usability. Critical or difficult workflows are normally implemented early on to make sure they work perfectly, and the feedback received is used to plan the next iterations.

All of this is great for producing stable software that lives up to the customer's needs. However, the regulatory environment requires specific documentation to be in place, proving not only that some tests passed but that everything has been adequately verified and validated.

The gap with the required regulatory documentation

Auditors don't care which automated tests passed or failed during the test cycles before the final software is released. They are of course happy to see continuous automated testing, but the fact that a test passed long ago, on a completely different code base, has little value for a release done much later. This is why the automated tests are integrated into the continuous build in the first place. It is therefore not necessary to supply the results of continuous automated testing on versions other than the final release as part of the regulatory documentation.

Far more interesting is the plan for how the automated tests are executed and what the code coverage should be, along with the evaluation of what has been done and achieved for a specific release.

Also, for a medical device, automated testing alone is not enough. You need defined requirements with traces to the test descriptions and to the executed tests for a specific release. The test descriptions need to be reviewed: do they cover everything in the linked requirement(s)? These reviews need to be documented as proof that the whole set of requirements is sufficiently covered by tests.
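
As an illustration, such a trace from requirements to reviewed test descriptions could look like the following minimal sketch. The class names, field names, and IDs are hypothetical; in practice this structure usually lives in a requirements management tool rather than in code.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str   # e.g. "REQ-42"
    text: str

@dataclass
class TestDescription:
    test_id: str                    # e.g. "TC-12"
    covers: list                    # requirement IDs this test verifies
    reviewed: bool = False          # set once the coverage review is documented
    steps: list = field(default_factory=list)

def uncovered_requirements(requirements, tests):
    """Return IDs of requirements with no reviewed test description tracing to them."""
    covered = {req_id for t in tests if t.reviewed for req_id in t.covers}
    return [r.req_id for r in requirements if r.req_id not in covered]

reqs = [Requirement("REQ-42", "The pump shall deliver 5 ml/h within 1% tolerance.")]
tests = [TestDescription("TC-12", covers=["REQ-42"], reviewed=True,
                         steps=["TC-12.1 set rate", "TC-12.2 measure output"])]
print(uncovered_requirements(reqs, tests))  # [] -> every requirement is covered
```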

The executed tests need at least human supervision and must provide the 'objective evidence' that the test has passed. Of course, you need to show that all defined tests have passed for the release version of the software; if they passed only for an older version, you need good arguments for why no side effects or changes could render the previous results invalid.

For each test execution you need to log information such as when, in which environment, and by whom the test was executed. For each test step you need proof that it passed and, where possible, the objective evidence. Any bugs found need to be documented and handled (can you still release, what should be communicated to customers about these issues, etc.).
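
A minimal sketch of what such an execution record could capture, assuming each step stores a pass/fail verdict plus a pointer to its objective evidence (a log file, screenshot, or measurement). The field names are illustrative, not a prescribed format:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class StepResult:
    step_id: str      # documented test step, e.g. "TC-12.1"
    passed: bool
    evidence: str     # path or URL to the objective evidence

@dataclass
class TestExecution:
    test_id: str                    # e.g. "TC-12"
    software_version: str           # the exact release candidate tested
    environment: str                # e.g. "Ubuntu 22.04, CI runner 3"
    executed_by: str
    executed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    steps: list = field(default_factory=list)

run = TestExecution("TC-12", "2.4.0-rc1", "Ubuntu 22.04, CI runner 3", "automated")
run.steps.append(StepResult("TC-12.1", True, "artifacts/tc-12-1.log"))
print(json.dumps(asdict(run), indent=2))  # archive alongside the test form
```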

How to still benefit from automated testing 

The holy grail is, of course, to use automated testing to ease the creation of audit- and certification-ready documentation. Our suggested way of doing this is to:

  • during each sprint, write or update the test description for each created or changed requirement;

  • write them in such a way that each test step has a unique ID;

  • write the automated tests in such a way that they are linked to the documented test steps (see the sketch after this list).
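
One way to make that last link explicit, sketched in Python: a small decorator ties each automated test to its documented step ID. The decorator and registry names are hypothetical illustrations, not a specific tool's API; in a pytest setup, a custom marker could serve the same purpose.

```python
# Registry mapping documented test step IDs to automated tests.
STEP_REGISTRY = {}

def covers_step(step_id):
    """Link an automated test to the documented test step it executes."""
    def decorator(test_fn):
        STEP_REGISTRY[step_id] = test_fn.__name__
        test_fn.step_id = step_id   # also queryable by reporting tools
        return test_fn
    return decorator

@covers_step("TC-12.1")
def test_pump_rate_within_tolerance():
    measured_rate = 4.98            # placeholder for a real measurement
    assert abs(measured_rate - 5.0) <= 0.05
```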

Once you have a release candidate, run all the automated tests and fill the test results into the test forms created for that release candidate.
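
Filling in the forms can itself be automated. A minimal sketch, assuming the test run exports JUnit-style XML and that each test case name embeds the documented step ID as suggested above (the file name and ID pattern are illustrative):

```python
import re
import xml.etree.ElementTree as ET

STEP_ID = re.compile(r"TC-\d+\.\d+")   # illustrative step ID pattern

def results_for_form(junit_xml_path):
    """Yield (step_id, passed) pairs to transfer into the release test form."""
    root = ET.parse(junit_xml_path).getroot()
    for case in root.iter("testcase"):
        match = STEP_ID.search(case.get("name", ""))
        if match is None:
            continue                   # test not linked to a documented step
        failed = (case.find("failure") is not None
                  or case.find("error") is not None)
        yield match.group(0), not failed

for step_id, passed in results_for_form("release-candidate-results.xml"):
    print(step_id, "PASS" if passed else "FAIL")
```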

As a last step, have a human go through the filled-in test forms to verify that the automated tests produced the correct results.

About the Author
Wolfgang Huber
Co-Founder