Last year, several key developments sparked optimism about government and industry coming together to create certainty and provide catalysts for the adoption of much-needed technologies to make healthcare more accessible and of higher quality. On the government side, President Biden signed an executive order in October 2023 on Safe, Secure, and Trustworthy Artificial Intelligence (AI) to advance a coordinated, federal government-wide approach toward the safe and responsible development of AI. This executive order was meant to create an umbrella for national regulation of the development and deployment of AI in various industries, including healthcare. The significance of this EO was that it signaled the federal government's intent to preempt state-by-state regulation, which could hurt private companies that would otherwise need to address each state's requirements separately. I wrote in my 2024 recap post that while there is always concern about a heavy government hand slowing down innovation, having a framework that provides certainty could actually be good for the field.
On the industry side, the Coalition for Health AI (CHAI) rolled out CHAI Model Cards, or "Nutrition Labels" (figure below). These scorecards were meant to give customers of AI technologies a starting point in evaluating the technologies they were considering for their businesses. While the field of AI is moving fast and we have all seen the rapid progress of generative AI over the last two years, the healthcare customers of these technologies don't necessarily have the expertise or the tools to evaluate these companies' claims. The Model Cards are meant to provide an independent assessment of AI solutions and give their potential customers confidence. They evaluate AI products for known risks and out-of-scope uses, biases and other ethical considerations, security, and compliance with accreditation and maintenance requirements. Having independent and trusted entities that can apply a standardized scorecard to health AI solutions could expedite their adoption.
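To make the idea concrete, here is a minimal sketch of what such a scorecard might look like as a data structure, using only the evaluation categories named above. The field names are illustrative assumptions on my part, not CHAI's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a model card / "nutrition label" for a health AI
# solution. Fields mirror the categories described above; names and types
# are assumptions, not CHAI's published format.

@dataclass
class ModelCard:
    solution_name: str
    developer: str
    intended_use: str
    out_of_scope_uses: List[str] = field(default_factory=list)  # uses the vendor disclaims
    known_risks: List[str] = field(default_factory=list)        # documented failure modes
    bias_and_ethics_notes: List[str] = field(default_factory=list)
    security_posture: str = ""                                  # e.g., data handling, access controls
    accreditations: List[str] = field(default_factory=list)     # compliance certifications held
    maintenance_requirements: str = ""                          # update/monitoring expectations

# Illustrative example of a filled-in card:
card = ModelCard(
    solution_name="Example Sepsis Risk Model",
    developer="Example Health AI Co.",
    intended_use="Early warning scores for adult inpatients",
    out_of_scope_uses=["pediatric patients", "outpatient triage"],
    known_risks=["reduced sensitivity in underrepresented populations"],
)
```

The point of a standardized structure like this is that a health system can compare two competing solutions field by field, rather than parsing each vendor's marketing materials from scratch.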
The third part of these developments was the plan for the creation of a national network of "AI assurance labs." These centers would use the model cards to evaluate new health AI solutions and certify them if they met the standards for safe and responsible AI products. The labs were to be a partnership between government and industry, with the initial funding to create them potentially coming from the federal government. Both the model cards and the assurance labs took years to conceive and were the result of government guidance and regulations, with private consortia such as CHAI creating the needed infrastructure based on them. Creating standards is usually one of the key developments that can propel an emerging industry forward faster, since those developing the technologies know what hurdles they need to clear to achieve adoption.
Along the way there was an election, a change of government, and a new regulatory regime. One of the first EOs signed by the new president revoked President Biden's EO on AI and created a new approach to federal regulation of this technology. The industry view is that the new EO aims to remove many of the safeguards put in place by President Biden's EO and to accelerate the development and adoption of AI solutions. Depending on your philosophy around speed vs. safety for the launch of new technologies, this may be good or bad news. The response has predictably fallen along partisan lines, and the new regulatory regime is too new to assess. However, there are those like the legendary entrepreneur and investor Marc Andreessen, traditionally a Democratic supporter, who had expressed grave concerns about the Biden administration's plans to regulate AI. And there are people on both the right and the left who worry that, absent sufficient safeguards, a rapid rollout of these technologies could benefit big tech and harm consumers if the AI models are biased and unsafe. The verdict on all of this will be delivered over the coming months and years.
For health AI and the AI assurance labs, these developments may not be good news. While regulation (and especially too much of it) is usually not welcomed by industry, in healthcare it is absolutely necessary. Since medical centers and healthcare providers deal with people's lives, any tool they use has to be known to be safe, first and foremost. Absent that assurance, they will be in no hurry to use new technologies. Given the legal liabilities these centers face, if their patients are harmed while a shiny new health AI solution is used in their practice, the fault will lie with them. This is why CHAI's proactive approach to creating the model cards and the assurance labs is such an important and welcome development for the field. In discussions with many executives at health systems and life science companies, it is clear that they don't have the depth of expertise or the staffing to evaluate AI solutions on their own. The course most of them are choosing is to wait and see whether these solutions establish a track record of safety and results before moving ahead with their own adoption. This will slow down adoption in the near term.
Back in December, some GOP members of Congress asked HHS to back away from supporting government-administered AI assurance labs. The request seemed to stem from concerns about stifling AI innovation through regulation and about potential conflicts of interest if assurance labs supplemented the FDA's regulatory role in this space. One of their key concerns was that the "creation of fee-based assurance labs which would be comprised of companies that compete" could result in an unfair competitive advantage for big tech and negatively impact innovation. Obviously, CHAI and the previous administration had a different view: the years-long efforts to create the model cards and assurance labs were meant to accelerate innovation and adoption, not stifle them. The current administration has only been in place for eight weeks, and while previous EOs from the Biden administration have been revoked and new EOs issued, there is still not much clarity on where we will end up. However, given that the opposition to these assurance labs came from the GOP, we can assume that both their role in supplementing the FDA's regulation of health AI technologies and government funding for them are in serious doubt.
There is, of course, a world in which these model cards are used to evaluate health AI solutions by a national network of privately funded assurance labs. This would need to be done in a way that does not introduce conflicts of interest or tilt the playing field toward larger, well-capitalized companies over smaller, less-funded ones that may have better products but lack the resources to go through such a review. One idea is to have industry stakeholders such as health systems, life science companies, and technology companies provide unrestricted grants to these labs, with the companies that submit their solutions paying modest fees. For the time being, there seems to be silence from CHAI and others involved in these efforts. I assume they are reassessing everything in light of these recent developments and plotting their next moves.