The third chapter of my recent book, AI Doctor: The Rise of Artificial Intelligence in Healthcare, is titled “Barriers to AI Adoption in Healthcare”. Anyone who has ever been involved in trying to introduce new technology into the practice of medicine knows how difficult it really is. The reasons are numerous, but the overarching issue is that patient safety is paramount, so any new clinical tool has to be rigorously tested to ensure it. Another umbrella issue is that introducing new tools into existing workflows has proven to be a herculean task. That’s because these workflows have been developed over decades to comply with regulatory requirements, protect patient safety, and ensure care team collaboration. Introducing new technology means disrupting these well-established workflows. What’s worse, these tools often make everyone’s life harder, not easier. Why has technology driven automation in industries like travel, hospitality, and mobility, but so much less of it in healthcare?
One big reason automation of clinical, administrative, and operational activities in healthcare has been so difficult is the fragmented nature of the data. While Amazon and Netflix can improve your shopping and streaming experience with only a slice of your data in those areas, the same is not possible in healthcare. Providing decision support for your doctor is not really possible if a chunk of your medical history is missing. Submitting a complete medical claim for payment is not possible if part of the patient’s work-up is missing from the documentation, and insurance companies are looking for any reason to deny or reduce payment. So if you leave it up to AI to process and submit your claims autonomously but your radiology reporting system is not connected to your EHR, AI can’t automate that process. While progress has been made on interoperability, it’s still a major issue. There are simply too many hubs to connect, even within a city, let alone across a state or the nation. Submitting a prior authorization for a medication may not require full interoperability, but proactively identifying health issues and gaps in care, and addressing them, requires near-complete medical records. If you had your colonoscopy at one medical center but your records at a different center show that you haven’t, that center may reach out to remind you to get one. That annoying text or email shows you that they don’t really know what’s going on with your health, and you’re more likely to turn off those notices or ignore them in the future.
Another major reason, in fact the first one I discuss in that chapter, is the lack of clear evidence of clinical benefit or financial return on investment (ROI). If you’re going to the cost and effort of bringing in a new technology, it needs to show at least one of several possible benefits: improved patient outcomes, better clinical productivity, greater operational and administrative efficiency, increased revenue, or lower costs. To show improved patient outcomes, you need real-world, prospective, controlled trials that document the promised benefits. Without that, you’re just making unsubstantiated claims, and the medical community is a tough crowd for that. It has shown itself to be quite uncooperative with anything that does not prove its claims through well-designed studies. A good example is the many radiology AI solutions that read scans and “help” the radiologist with their workflow. All of them have FDA approval, a low bar that can be met by showing that your tool is as accurate as a radiologist at finding a defined abnormality. In most cases, however, the studies showing that using the AI in addition to the radiologist improves patient outcomes haven’t been done. What does that mean for those companies? Insurance companies are not paying for their tools, and if patients want AI to assist the radiologist in reading their scans, they have to pay out of pocket. Most opt not to, and these tools have seen limited adoption.

There are a number of other barriers, such as providers’ medicolegal concerns, lack of staff training in using these tools, lack of IT resources to implement and monitor them, an unclear regulatory framework, cost, and more. However, in the last 18 months we have seen brisk adoption of some use cases, such as ambient documentation, co-pilot functions within the EHRs, and, to a lesser extent, coding. All of these are workflow and administrative use cases, and there is a reason for that: they are lower-risk use cases that don’t involve clinical decisions, and none is fully autonomous. Doctors review the notes generated by ambient AI documentation tools and any codes that are automatically created, and chart summaries are intended to save them from reading through a thick patient record, though that information is often validated with the patient anyway. I’ve also heard that chart summaries contain enough errors that doctors have to go back and check the information again, which is counter-productive. There’s also early evidence of ROI for these tools. The Peterson Health Technology Institute published a study estimating lower burnout and cognitive load for physicians using ambient AI documentation, but the financial ROI to health systems is not yet clear.
As for clinical AI tools, it doesn’t look like companies are doing the needed studies, so adoption remains low. These studies take time and money, and most companies seem to think they can magically drive adoption of their products without documenting clinical benefits. An RSNA study found that 81% of AI models dropped in performance when tested on external datasets; for nearly half, the drop was noticeable, and for a quarter, it was significant. After these tools are approved, there’s no standard way to monitor how they perform across different scanners, hospitals, or patient populations. In a recent interview with Health IT News, Pelu Tran, CEO and cofounder of Ferrum Health, opined that companies are not doing real-world studies to show that their tools perform as expected in that messy environment and lead to improved patient outcomes. He calls on buyers to demand solid evidence of clinical outcome improvement or financial ROI.
