Evidence-based AI is not limited to showing that AI algorithms will improve patient outcomes, improve clinical workflows, or lower the cost of care. There is currently a great deal of variability in risk-mitigating AI development and deployment practices. Existing and continuously emerging evidence and experience with AI development and deployment can mitigate many AI risks. A paradigm for the development and implementation of AI, similar to evidence-based medicine, is needed today to launch an evidence-based AI movement for health and healthcare.
An article in the Journal of the American Medical Informatics Association mapped known AI risks to evidence-based best-practice mitigation strategies that could alleviate them. These risks include, among others, inadequate data security and privacy, lack of transparency, and poor workflow integration and user feedback. Evidence-based AI risk mitigation practices are available in three general areas: data selection and management; algorithm development and performance; and trust-enhancing organizational business practices and policies.
Specific risk mitigation practices in these areas include data encryption, securing mobile hardware, keeping detailed provenance records, performance surveillance, AI models that account for causal pathways, adherence to accepted data governance policies, and human-in-the-loop practices. Professional organizations and associations are critical to leading the field in reviewing available evidence, translating that evidence into practice guidelines, and educating AI developers and implementers about the need to adhere to such guidelines.
The government can establish purchasing rules specifying that AI solutions built on evidence-based AI development and deployment will be favored for public sector acquisitions. As we have seen time and again, such a signal to the market can have a significant impact on industries that sell to the public sector. The government, in turn, would need a system that verifies that solutions adhere to evidence-based AI development and deployment standards.
The government also could regulate AI solutions. In fact, the U.S. Food and Drug Administration operates a certification program for Software as a Medical Device (SaMD). In this voluntary program, SaMD developers who rely on AI in their software are assessed and certified by demonstrating an organizational culture of quality and excellence and a commitment to ongoing monitoring of software performance in practice.