In the last post, I discussed how a lower bar for clearing AI products at the FDA has not led to faster adoption of these products by the medical community; in fact, it may be a major contributor to the slower adoption. When institutions have confidence that the products they’re buying have been fully tested, and can review the efficacy and safety evidence themselves, they are faster to buy and use medical products. There are reasons why the FDA is setting the bar lower for AI products. These products change over time as they’re exposed to new data, and they can improve as they are fed feedback about the accuracy of their output and are continuously retrained. Why, then, would you force a company to run years of prospective trials to show the accuracy of a model that will change as it is exposed to real-world data? Would the company then need to spend years running new clinical trials to prove the efficacy and safety of the updated model? No company would be able to keep that up. You can see that this is not an easy decision, and there are many factors to consider.
The decision to allow AI products to be released with light evidence of accuracy on retrospective data is also a function of the FDA’s bandwidth to handle the high volume of new products suddenly being submitted for approval. Carefully evaluating the training and validation data for a model, something many people think the FDA should do before approving a product, takes significant manpower. That would mean radically increasing the FDA’s funding so it could hire and train a much larger staff to handle the increased volume of submissions, and it assumes that people with the right backgrounds are even available. Given that AI is an emerging field, one of the key adoption barriers is that there are not enough people with the right skill sets to fill the key roles at regulatory agencies, medical institutions, reimbursement bodies, and more. So, how do you handle all of this when entrepreneurs and investors are hungry to go after the commercial opportunity? Well, this has been a moving target, but the FDA has created several pathways: 510(k) clearance for AI as an assistive device for medical professionals, the Software as a Medical Device (SaMD) framework, and the De Novo pathway, along with proposals to certify a company as having the right processes to test and monitor its products so it doesn’t have to keep going back for approval of every iteration of an already approved product. All of this is meant to introduce flexibility and offer a way out of the gate while the agency (and everyone else!) catches up with the capabilities of the technology.
It’s important to acknowledge that while the FDA has taken positive steps and tried to be deliberative in its approach to regulating this new technology, that approach has not yet created enough confidence in the medical community for an FDA-cleared product to be viewed as effective and safe. While frustrating for all the stakeholders involved, given the limitations in funding and manpower, it is what it is! The agency is trying to keep up with a fast-moving field and hasn’t quite figured it out yet. As I discuss in my book, AI Doctor: The Rise of Artificial Intelligence in Healthcare, the FDA is not the only body that matters here. While FDA clearance is needed for any medical product, the types of certification that give the buyers and users of these products the confidence to move faster with their purchase and adoption can come from other sources. One is CHAI, the Coalition for Health AI, launched in December 2021 by key stakeholders to develop consensus and arm health IT decision-makers with academic research and vetted guidelines to help them choose responsible technologies that ensure equitable benefit for all patients.
In April 2023, CHAI released its first blueprint for the effective and responsible use of artificial intelligence in healthcare. Its objective is to generate standards and robust technical and implementation guidance for AI-guided clinical systems. The Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare aims to ensure ethical, unbiased and appropriate use of the technology, to combat algorithmic bias and to define fairness and efficiency goals up front. While the blueprint gave developers guidance on how to build ethical and unbiased products and gave buyers a framework for evaluating them, applying those standards has been left up to the stakeholders themselves. As you may imagine, while it’s good to have a framework for evaluating the AI solutions coming at you from all directions, you still need the expertise to apply the framework and the bandwidth to engage in that process. Bandwidth is not something that medical institutions’ leadership or their providers have in large supply. If they have to take on the work of applying these published blueprints and frameworks themselves, it introduces friction, and as we’ve discussed in the last two posts, friction in a commercial process means delays and slow uptake. So while the blueprint moved the ball forward, it did not untangle the complicated web that buyers and providers find themselves in.
Most recently, a set of developments has emerged that can offer meaningful solutions to the problem of slow adoption and address the key questions in the minds of the buyers and users of health AI technologies. CHAI has proposed creating AI “nutrition labels” for products, built around the criteria that matter most to medical institutions, and it is also proposing a network of assurance labs that can test products against those criteria and issue the labels. CHAI is publishing certification criteria for these assurance labs so they can fully test and certify medical AI products as safe and effective. In the next post in this series, we will get into the details of these two developments and how they can be instrumental in accelerating the adoption of AI in healthcare.