As we discussed in the last post in this series, there are new developments in the regulation of health AI products. While the FDA is one key body clearing these solutions for efficacy and safety, it does not require real-world studies of safety or outcome improvements, so its stamp does not necessarily translate into rapid adoption by medical institutions and providers. In fact, it has been quite the opposite. Because providers are not presented with prospective, real-world studies documenting the benefits and risks, they have not seen FDA clearance as a reassuring milestone that spurs them into action. Instead, they have run limited pilots and smaller experiments with these solutions to generate their own evidence and gauge ROI and performance before scaling up adoption. In most cases those pilots have not led to widespread adoption, since the solutions did not perform as well as hoped in real-world environments. Against this background, the recent developments at The Coalition for Health AI (CHAI) are rather significant and could mark a turning point for the industry.
First, CHAI announced that it will soon release so-called CHAI Model Cards, which the group likens to ingredient and nutrition labels on food products. When you are examining a health AI model, the card, like a nutrition label, gives you key information on the criteria you care about before adopting a solution. The Model Card incorporates data quality and integrity requirements derived from the FDA's guidance on the use of high-quality real-world data, as well as testing and evaluation metrics aligned with the National Academy of Medicine's AI Code of Conduct. The draft model cards use a standardized template designed to provide a degree of transparency about algorithms, presenting baseline information to help end users evaluate the performance and safety of AI tools. That information includes the identity of the AI developer, the model's intended uses, targeted patient populations, and more. Other entries cover security and compliance accreditations and maintenance requirements. The cards also include information about known risks and out-of-scope uses, biases, and other ethical considerations. The model cards follow two other documents CHAI has released in the past two years: its Blueprint for Trustworthy AI Implementation Guidance and Assurance in 2023 and its draft framework for responsible development and deployment in June 2024.
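To make that structure concrete, here is a minimal sketch of what such a card might look like as a machine-readable record. The field names and example values are illustrative assumptions based on the categories described above, not CHAI's actual template.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative health AI model card; fields are hypothetical,
    loosely mirroring the categories CHAI has described publicly."""
    developer: str                          # identity of the AI developer
    model_name: str
    intended_uses: list[str]                # what the model is designed for
    target_populations: list[str]           # patient populations it targets
    performance_metrics: dict[str, float]   # e.g., AUROC, sensitivity
    accreditations: list[str]               # security/compliance accreditations
    maintenance: str                        # update and monitoring requirements
    known_risks: list[str]                  # documented failure modes
    out_of_scope_uses: list[str]            # uses the developer warns against
    ethical_considerations: list[str]       # known biases and related concerns

# Hypothetical example of a filled-out card for a fictional vendor/model.
card = ModelCard(
    developer="Example Health AI, Inc.",
    model_name="sepsis-risk-v2",
    intended_uses=["early sepsis risk flagging for inpatient adults"],
    target_populations=["hospitalized adults, 18+"],
    performance_metrics={"AUROC": 0.87, "sensitivity": 0.81},
    accreditations=["SOC 2 Type II", "HIPAA compliance attestation"],
    maintenance="quarterly revalidation on local data",
    known_risks=["degraded performance on immunocompromised patients"],
    out_of_scope_uses=["pediatric patients", "outpatient settings"],
    ethical_considerations=["rural patients underrepresented in training data"],
)
print(card.model_name, card.performance_metrics)
```

A standardized record like this is what makes the "nutrition label" analogy work: a procurement team can scan the same fields across competing models rather than digging through each vendor's bespoke documentation.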
The model cards are intended to comply with the HTI-1 requirements released by the ONC, and to serve as an easily legible starting point for organizations reviewing AI models during procurement and for EHR vendors seeking to comply with the Health IT Certification Program. This gives organizations that want to use AI solutions to improve patient care, streamline operations, and strengthen their business models not just a starting point but also an outsourced assessment of the solutions they are considering. That is critical, since most lack the expertise to perform this type of assessment and most likely lack the organizational bandwidth as well. Because the model cards will take into account regulations and government requirements for responsible AI, they also address key concerns around model bias and equity that these institutions and users need to be mindful of.
So, who scores these AI solutions for the model cards? That is the second key development. CHAI is also championing the creation of a national network of independent assurance labs for healthcare AI, and it is drafting a framework for certifying those future labs. The assurance labs will test health AI models against the model card and provide a certification that companies can use when commercializing their technologies to healthcare customers. This creates a much-needed set of standards that all models are tested against and removes major friction for customers considering these solutions. All of the AI assurance labs will undergo a certification process to ensure they meet appropriate integrity requirements, and CHAI will announce the first two assurance labs by the end of the year. Among the requirements for certified labs are disclosing any conflicts of interest and meeting the data quality and integrity standards that CHAI gleaned from the Food and Drug Administration (FDA). Assurance labs are tasked with evaluating the testing data used to build the models and making sure that data is representative of any given health system's patient population.
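As a rough illustration of what "representative" could mean in practice, the sketch below compares the demographic mix of a model's test data against a hypothetical health system's patient population using total variation distance. The categories, numbers, and acceptance threshold are assumptions for illustration only, not CHAI's actual methodology.

```python
# Toy representativeness check; illustrative only, not CHAI's methodology.

def total_variation_distance(p: dict[str, float], q: dict[str, float]) -> float:
    """Half the sum of absolute differences between two categorical
    distributions: 0.0 means identical, 1.0 means fully disjoint."""
    categories = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)

# Hypothetical age-band mix of the vendor's test data vs. one health system.
test_data_mix = {"18-39": 0.20, "40-64": 0.45, "65+": 0.35}
health_system_mix = {"18-39": 0.30, "40-64": 0.40, "65+": 0.30}

distance = total_variation_distance(test_data_mix, health_system_mix)
THRESHOLD = 0.10  # assumed acceptance cutoff, purely illustrative

if distance <= THRESHOLD:
    print(f"TVD {distance:.2f}: test data plausibly representative")
else:
    print(f"TVD {distance:.2f}: flag for further review")
```

In practice a lab would presumably compare many more dimensions, such as diagnosis mix, race and ethnicity, and payer mix, and would likely rely on formal statistical tests rather than a single fixed cutoff.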
Much work remains to be done in refining the model cards and these assurance labs. The labs need to be certified against transparency and conflict-of-interest requirements to avoid pay-to-play arrangements between labs and health AI companies. Consistent standards across labs will also be needed to prevent developers from "assurance lab shopping." Even so, these are significant and positive developments for the industry, and hopefully, in the near future, for medical centers, providers, and patients.