From a Static Regulation Model Towards Regulation as Iterative Checkpoints
Artificial Intelligence (AI) has the power to improve health outcomes for patients because AI can distinguish patterns in data that are not discernible to the clinician. Finding these patterns relies on a supply of health data to train machines that learn to diagnose, predict, or perform more complex medical tasks.
Traditional, or non-adaptive, Machine Learning (ML) uses two separate paths: training and prediction. Adaptive AI, by contrast, uses a single-path process that monitors and learns from changes to the input and output values and their associated characteristics.
Software developers can use machine learning to create an algorithm that is "locked" so that its function does not change, or "adaptive" so its behavior can change over time based on new data.
Therefore, adaptive artificial intelligence and machine learning technologies differ from other Software as a Medical Device (SaMD) in that they have the potential to adapt and optimize device performance in real time, continuously improving health care for patients.
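The locked-versus-adaptive distinction can be made concrete with a minimal sketch. All names here (LockedModel, AdaptiveModel, fit, predict, update) are illustrative, not taken from any regulatory text or library: a locked model freezes its parameters after training, while an adaptive one keeps nudging them as new labeled cases arrive.

```python
class LockedModel:
    """Trained once; behavior never changes after deployment."""
    def __init__(self):
        self.threshold = None

    def fit(self, values, labels):
        # Set the decision threshold to the mean of the training values.
        self.threshold = sum(values) / len(values)

    def predict(self, value):
        return 1 if value >= self.threshold else 0


class AdaptiveModel(LockedModel):
    """Continues to learn: each new labeled case moves the threshold."""
    def update(self, value, label, rate=0.1):
        # Shift the threshold toward the new observation, so the model's
        # behavior drifts as real-world data arrives.
        direction = -1 if label == 1 else 1
        self.threshold += rate * direction * abs(value - self.threshold)


train_x, train_y = [1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1]

locked = LockedModel()
locked.fit(train_x, train_y)

adaptive = AdaptiveModel()
adaptive.fit(train_x, train_y)
adaptive.update(4.0, 1)  # a new real-world case shifts the boundary

# The locked model's threshold is unchanged; the adaptive one has moved.
print(locked.threshold, adaptive.threshold)
```

Under this framing, a regulator reviewing the locked model at approval time sees exactly the behavior patients will get; the adaptive model's behavior at approval time is only a starting point.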
The following are useful definitions:
Software as a Medical Device (SaMD) is software intended to be used for one or more medical purposes that performs those purposes without being part of a hardware medical device.
Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.
Machine Learning (ML) is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data.
An algorithm is a sequence of instructions used to solve a problem.
Adaptive AI is software that self-updates over time with new data.
There are challenges in the regulation of adaptive AI/ML. The FDA is therefore considering a Total Product Lifecycle (TPLC) based regulatory framework for these technologies. This framework would allow modifications to be made from real-world learning and adaptation of AI/ML while still ensuring the safety and effectiveness of the software as a medical device.
A potential approach the FDA might take for artificial intelligence and machine learning-driven software modifications is described in its discussion paper published April 2, 2019: “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback.”
The FDA plans to apply a risk-based strategy for enforcing device-related requirements. Additionally, it does not intend to regulate certain types of low-risk software, e.g., programs whose users can independently review and understand the basis for their recommendations. Certain digital health technologies—such as mobile apps that are intended only for maintaining or encouraging a healthy lifestyle—generally fall outside the scope of the FDA’s regulation.
Instead, the FDA said it plans to focus oversight on:
1.) Higher-risk software functions, including those used in serious or critical situations.
2.) AI/ML software whose logic and inputs may not be fully explained to the user.
After categorizing AI/ML software according to its potential risk to the patient and its intended use, the FDA proposes treating regulation of AI/ML as a series of iterative checkpoints rather than a one-time certification model.
By establishing checkpoints at which manufacturers report on specific performance and safety indicators post-deployment, the SaMD regulation process allows approved devices to be modified for greater efficacy of use.
These approaches work together to determine the level of FDA review required for new modifications. Within the TPLC approach, the manufacturer continues to monitor the SaMD by tracking use and evaluating performance. As the SaMD incorporates real-world data, it may need to be re-trained, re-tuned, and possibly re-evaluated (see diagrams below). The manufacturer may need to submit a new premarket submission.
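The monitoring loop described above can be sketched conceptually. This is an illustration only, not FDA terminology: the function name, the accuracy metric, and the 90% checkpoint threshold are all invented for the example. The idea is that real-world performance is compared against a pre-specified bar, and a shortfall triggers re-training and re-evaluation before any modified version ships.

```python
def checkpoint_review(predictions, outcomes, min_accuracy=0.9):
    """Compare real-world performance to a pre-specified bar.

    Returns the observed accuracy and whether it fell below the
    checkpoint, signaling the need for re-training/re-evaluation.
    """
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(outcomes)
    return accuracy, accuracy < min_accuracy


# Simulated post-deployment data: the model got 8 of 10 cases right.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
actual = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

accuracy, retrain = checkpoint_review(preds, actual)
if retrain:
    # In the TPLC framing, this is where the manufacturer would
    # re-train/re-tune the SaMD and decide, per the SPS, whether the
    # change also requires a new premarket submission.
    print(f"accuracy {accuracy:.0%}: below checkpoint, re-evaluate")
else:
    print(f"accuracy {accuracy:.0%}: within pre-specification")
```

The design point is that the checkpoint is declared before deployment, so the decision to re-train is mechanical rather than discretionary, which mirrors the pre-specification idea in the FDA's proposal.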
If the modification is not part of the SaMD Pre-Specifications (SPS), the manufacturer can do one of the following:
1.) The manufacturer can contact the appropriate review division to confirm whether the modification falls within the original SPS.
2.) The manufacturer can submit a pre-submission for review.
3.) The manufacturer can submit an entirely new premarket notification.
While there are complexities in regulating AI- and ML-based software, in moving away from the current static regulation model toward regulation of AI/ML as a series of iterative checkpoints, the FDA’s expectations of the quality systems responsible for generating SaMD, including ensuring the quality and relevance of data and the transparency of output aimed at users, are in line with Good Machine Learning Practice (GMLP).