Major health plans, along with health technology companies like Philips and Ginger, collaborated to develop a new standard to advance trust in artificial intelligence solutions. Convened by the Consumer Technology Association (CTA), a working group of 64 organizations set out to create a standard that identifies the core requirements and baseline criteria for determining trustworthy AI solutions in healthcare. The standard has been accredited by the American National Standards Institute.
“AI is providing solutions—from diagnosing diseases to advanced remote care options—for some of health care’s most pressing challenges,” said Gary Shapiro, president and CEO of CTA. “As the U.S. health care system faces clinician shortages, chronic conditions and a deadly pandemic, it’s critical patients and health care professionals trust how these tools are developed and their intended uses.”
The consensus-driven standard considers three ways trust is created and maintained: human trust, technical trust and regulatory trust. Human trust covers topics related to human interaction with, and perception of, the AI solution, including explainability, user experience and the solution's levels of autonomy.
Technical trust considers topics related to data usage, including access and privacy, data quality and integrity (including issues of bias) and data security. This area also addresses the technical execution of an AI system's design and training so that it delivers results as expected.
Regulatory trust is gained through industry compliance with clear laws and regulations, informed by guidance from regulatory agencies, federal and state law, accreditation boards and international standardization frameworks.