While there has been an explosion of FDA-approved health AI products, most have not yet gained much traction in everyday clinical workflows. There are several reasons for this. First, it’s important to understand the difference between how the FDA approves or clears a drug or medical device versus how it handles a health AI product. To secure FDA approval for marketing, drugs and medical devices must go through several phases of pre-clinical and clinical testing to prove their safety and efficacy. This includes real-world studies in which patients take the drugs or use the devices and their experiences are followed and documented in clinical trials. The end result is robust documentation of the benefits, side effects, and adverse reactions these products cause in the real world. It captures the magnitude of the benefits to patients’ outcomes in terms of lower mortality, higher quality of life, less pain, better vision or hearing, or any other measurable benefit. These same clinical trial results are then used to secure reimbursement from insurance companies, and the reimbursement amount is determined by the magnitude of the outcome benefits and the safety profile of each product.
This path is tried and true. It is responsible for the trust we have when we walk into a pharmacy and pick up an antibiotic, a pain medication, or a cancer drug without worrying about whether it will harm us instead of addressing our health issue. Yes, the process is long and expensive. But it creates confidence for all stakeholders involved and results in mostly frictionless commercial transactions. Yes, regulation is actually an important part of a successful capitalistic system. Regulation that ensures the safety of products sold to consumers, including medical products like drugs and devices, creates confidence on the part of both sellers and buyers. When medical centers buy these products, they can feel confident that the products have gone through the necessary testing and are ready for prime time. They don’t have to wonder how these products will perform at their institutions or with their patients. This makes the job of the makers of those products much easier post-approval. And to convince doctors to use, recommend, and prescribe these products, you only have to educate them on the evidence you have already produced to secure regulatory approval, including guidance on which patients in their practice are appropriate candidates. That your product is safe and has some efficacy is not in question, since you have cleared the regulatory hurdles.
The same cannot be said of the emerging category of health AI products. So far, these products are only required to show that they can perform as well as a clinician in a LAB ENVIRONMENT! What? Yes, you read that right! To get clearance for a radiology solution that detects cerebral hemorrhage, you develop and validate your AI model on limited datasets, then submit for clearance by taking a few hundred head CTs and showing that your model finds the hemorrhage at roughly the same rate as a few independent radiologists. The FDA does not require you to test your system in a real-world trial where your product is used to diagnose cerebral hemorrhage when a patient presents in the emergency room. This means you are not presenting evidence that your product works as well in a real-world setting as it does in your lab-based validation testing. You have not yet proven that your AI model will perform up to expectations on real patients in a hectic clinical environment, where the patients may have very different characteristics from those your model was trained on. If your miss rate is too high in that environment, it may put those patients at risk if radiologists are relying on the AI product to help diagnose them. And if your product flags too many false positives when exposed to the real-world mix of patients, the medical staff will lose confidence and patience with it and stop relying on it.
What this means is that when you show up at a medical center to sell your AI solution and tout your FDA clearance, a skeptical and confused audience is waiting for you. You do not have a high-quality dossier with results from a multi-center trial that clearly documents the efficacy of your solution and the magnitude of its improvement in patient outcomes. You do not have well-documented miss and false-positive rates for your product from several centers that resemble their patient population. The safety of using your product is not fully documented. The only thing you can show them is that, in a controlled environment, your solution was as accurate as a few radiologists at reading a few hundred scans. That doesn’t clear the hurdle for most clinicians and medical centers, which explains the low adoption rates for many FDA-cleared solutions. I have received more than a few calls from investors in companies with FDA-cleared health AI solutions who were frustrated that they were not hitting their revenue targets and were looking for help accelerating the commercialization of their products. Unfortunately, I had no magic potion to offer them. Given that clinical products have to prove their efficacy and safety through well-powered, real-world, prospective studies, the FDA is doing these AI solutions no favors by lowering the bar for clearing them. It simply shifts the responsibility of deciding whether the evidence is acceptable for adoption onto each medical center and provider.
This creates friction, and friction means slower adoption and some very frustrated entrepreneurs and investors! This is why less regulation is not always better. While I think the FDA has some very good reasons for not demanding the same level of evidence it requires of drugs and medical devices, ultimately health AI adoption will require more rigorous vetting of training and validation data, documentation of real-world performance, and quantification of patient outcome improvements. Additional organizations are now stepping in to address this need and provide the type of certification that will help medical centers and providers stop worrying about these issues and start using these solutions.
In the next post, we will discuss some of these emerging developments and how they may impact the adoption rates of health AI solutions.