The concerns around regulation of AI-based solutions in healthcare are real. Given the unique challenges of labeling for AI/ML-based devices and the need for manufacturers to clearly describe the data that were used to train the algorithm, the relevance of its inputs, the logic it employs (when possible), the role intended to be served by its output, and the evidence of the device’s performance, regulation of these solutions will not be easy.
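As an illustration only, a manufacturer might organize these descriptive labeling elements in a structured form. The sketch below, in Python, is a minimal example of that idea; the field names and placeholder values are our own assumptions, not an FDA template or any particular manufacturer's labeling.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: one way to structure the descriptive elements a
# manufacturer would need to communicate for an AI/ML-based device. Field names
# and values are hypothetical assumptions, not an FDA-specified format.
@dataclass
class AlgorithmDescription:
    training_data: str            # provenance, scope, and demographics of the training data
    model_inputs: List[str]       # clinical inputs the algorithm consumes
    decision_logic: str           # plain-language summary of the logic, where explainable
    intended_output_role: str     # how the output is meant to be used in care
    performance_evidence: str     # reference to validation studies and metrics

example = AlgorithmDescription(
    training_data="<describe dataset provenance, size, and patient demographics>",
    model_inputs=["<imaging series>", "<patient age>", "<patient sex>"],
    intended_output_role="<e.g., triage/prioritization aid, not a standalone diagnosis>",
    decision_logic="<plain-language description of how inputs map to the output>",
    performance_evidence="<reference to standalone and clinical performance data>",
)
print(example.intended_output_role)
```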
Bias and generalizability are not issues exclusive to AI/ML-based devices. Given the opacity of the functioning of many AI/ML algorithms, as well as the outsized role these solutions may play in healthcare, it is especially important to carefully consider these issues for AI/ML-based products. Because AI/ML systems are developed and trained using data from historical datasets, they are vulnerable to bias, and prone to mirroring biases present in the data.
The FDA recognizes the crucial importance of medical solutions being well suited to a racially and ethnically diverse intended patient population, and the need for improved methodologies for the evaluation and improvement of machine learning algorithms. This includes methods for the identification and elimination of bias, and for ensuring the robustness and resilience of these algorithms so they can withstand changing clinical inputs and conditions.
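One simple example of such a bias-identification method is auditing a model's performance separately for each demographic subgroup. The sketch below is a minimal illustration of that idea; the record format, the subgroup labels, and the use of plain accuracy as the metric are assumptions chosen for demonstration, not a method prescribed by the FDA.

```python
from collections import defaultdict

# Minimal sketch of a subgroup performance audit: compute accuracy per subgroup
# so that large gaps between groups can be flagged for further investigation.
# The record format and metric are illustrative assumptions.
def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical predictions; a large accuracy gap between subgroups would be a
# signal of potential bias worth examining.
results = subgroup_accuracy([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
])
print(results)  # group_a ~0.67, group_b ~0.33 for this toy input
```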
Gathering performance data on the real-world use of the SaMD may allow manufacturers to understand how their products are being used, identify opportunities for improvements, and respond proactively to safety or usability concerns. Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, and to support the benefit-risk profile assessed in a particular marketing submission.
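As a rough illustration of what such real-world monitoring could look like, the sketch below compares a recent window of a deployed model's output scores against a baseline window and flags a possible drift. The scoring windows, the z-score-style check, and the threshold are hypothetical assumptions, not a prescribed FDA method.

```python
import statistics

# Illustrative sketch of real-world performance monitoring: flag when the
# distribution of recent output scores shifts far from a baseline window.
# Threshold and check are assumptions for demonstration purposes.
def drift_alert(baseline_scores, recent_scores, threshold=2.0):
    """Return (alert, shift): alert is True if the recent mean has moved more
    than `threshold` baseline standard deviations from the baseline mean."""
    base_mean = statistics.mean(baseline_scores)
    base_std = statistics.stdev(baseline_scores)
    recent_mean = statistics.mean(recent_scores)
    shift = abs(recent_mean - base_mean) / base_std if base_std else 0.0
    return shift > threshold, shift

# Hypothetical score windows collected from field use of a deployed SaMD.
alert, shift = drift_alert(
    baseline_scores=[0.62, 0.58, 0.65, 0.61, 0.60, 0.63],
    recent_scores=[0.75, 0.78, 0.72, 0.80, 0.77, 0.74],
)
print(f"drift flagged: {alert} (shift = {shift:.1f} standard deviations)")
```

In practice such a check would be one small piece of a broader monitoring program, but it shows how routinely collected real-world data can surface changes in device behavior early.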
One thing that is very clear is that the agency is becoming more comfortable with reviewing these solutions, and the time from filing to clearance is getting shorter. This was apparent in the difference in review times between the two Aidoc algorithms cleared over the span of three years. Aidoc’s CEO, Elad Walach, mentioned after the clearance of their second algorithm, for the diagnosis of pulmonary embolism, that Aidoc’s previous work with the FDA had helped to create a more reliable and efficient regulatory pathway leading to the new clearance, which took 12 months less than their first.