Many concerns about AI in healthcare center on how the technology reinforces bias already present in the system. There are two entry points at which bias can seep into an AI tool. The first is the design itself: the biases of the design team imprint on how the system makes decisions and learns from its experiences, which is why assembling diverse design teams is key. The second is the data used to train the system. In healthcare, many of the common data inputs – claims data, clinical trial data – reflect biases in how care has been delivered in the past.
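As a rough illustration of that second entry point, a team might begin by checking whether the training data even represents the populations the tool will serve. The sketch below is a minimal example, assuming a hypothetical claims extract with a `race_ethnicity` column and a binary outcome; the column names, reference shares, and threshold are illustrative assumptions, not drawn from any specific system.

```python
import pandas as pd

# Hypothetical claims extract; column names are illustrative assumptions.
claims = pd.DataFrame({
    "race_ethnicity": ["White", "White", "Black", "Hispanic", "White", "Black"],
    "high_cost_flag": [1, 0, 0, 1, 1, 0],   # outcome the model would learn from
})

# Compare each group's share of the training data and its outcome base rate.
audit = (
    claims.groupby("race_ethnicity")["high_cost_flag"]
    .agg(n="size", outcome_rate="mean")
    .assign(share=lambda d: d["n"] / d["n"].sum())
)
print(audit)

# Flag groups badly under-represented relative to the served population
# (reference shares would come from census or enrollment data in practice).
reference_share = {"White": 0.60, "Black": 0.20, "Hispanic": 0.20}
for group, row in audit.iterrows():
    if row["share"] < 0.5 * reference_share.get(group, 0):
        print(f"Warning: {group} is under-represented in the training data.")
```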
A paper in the Journal of the American Medical Informatics Association argued that biased models may compound the disproportionate impact the coronavirus pandemic is having on people of color. “If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden,” wrote the coauthors. “These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias — even if explicitly excluding sensitive attributes such as race or gender.”
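The coauthors' last point – that dropping race or gender from the inputs does not remove the bias – can be seen in a toy example: if another field, such as a ZIP-code-level poverty measure, is correlated with the excluded attribute, the model reconstructs the signal anyway. The sketch below uses synthetic data and scikit-learn purely to illustrate the mechanism; it does not reflect the paper's actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic sensitive attribute and a correlated proxy (e.g., ZIP-code poverty rate).
group = rng.integers(0, 2, n)                    # 0 / 1 sensitive attribute
proxy = group * 0.8 + rng.normal(0, 0.3, n)      # proxy strongly tracks the group
clinical = rng.normal(0, 1, n)                   # a legitimate clinical feature

# Historical labels that encode biased care: one group was referred less often.
referred = (clinical + 1.0 - 0.9 * group + rng.normal(0, 0.5, n)) > 0

# Train "blind" to the sensitive attribute: only proxy + clinical are inputs.
X = np.column_stack([proxy, clinical])
model = LogisticRegression().fit(X, referred)

# The excluded attribute still drives the predictions through the proxy.
pred = model.predict_proba(X)[:, 1]
print("mean predicted referral, group 0:", pred[group == 0].mean())
print("mean predicted referral, group 1:", pred[group == 1].mean())
```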
Beyond basic dataset challenges, models lacking sufficient peer review can hit unforeseen roadblocks when deployed in the real world. Scientists at Harvard found that algorithms trained to recognize and classify CT scans could become biased toward the scan formats of certain CT machine manufacturers. Meanwhile, a whitepaper published by Google described challenges in deploying an eye disease-prediction system in hospitals in Thailand, including problems with scan accuracy.
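One way careful review or local validation surfaces this kind of problem is by stratifying the evaluation: instead of reporting a single accuracy number, report it per scanner manufacturer (or per site) and look for gaps. Below is a minimal sketch, assuming predictions and labels are already in hand along with a manufacturer tag for each scan; the vendor names and records are placeholders.

```python
from collections import defaultdict

# Hypothetical evaluation records: (manufacturer, true_label, predicted_label).
records = [
    ("VendorA", 1, 1), ("VendorA", 0, 0), ("VendorA", 1, 1),
    ("VendorB", 1, 0), ("VendorB", 0, 0), ("VendorB", 1, 0),
]

# Accumulate correct / total counts per manufacturer rather than overall.
counts = defaultdict(lambda: [0, 0])     # manufacturer -> [correct, total]
for vendor, y_true, y_pred in records:
    counts[vendor][0] += int(y_true == y_pred)
    counts[vendor][1] += 1

for vendor, (correct, total) in sorted(counts.items()):
    print(f"{vendor}: accuracy {correct / total:.2f} (n={total})")
# A large gap between vendors is a red flag that the model keyed on scan format.
```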
All of this points to the fact that implementing AI models is not as simple as finding or developing a good model, connecting it to your information systems, and letting it work its magic. Health systems need a robust governance process to vet models before they are used and to monitor their effectiveness and output once they are in place.
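In practice, monitoring effectiveness and output often comes down to routine checks like the one sketched below: comparing the live model's recent outputs against the values seen at validation and alerting when they drift. The function name, numbers, and tolerance are illustrative assumptions, not a specific governance framework.

```python
from statistics import mean

def drift_check(baseline_scores, recent_scores, max_shift=0.05):
    """Flag when the mean model output drifts beyond an agreed tolerance.

    baseline_scores: outputs recorded during validation / go-live.
    recent_scores:   outputs from the latest monitoring window.
    """
    shift = abs(mean(recent_scores) - mean(baseline_scores))
    return shift > max_shift, shift

# Illustrative numbers only: validation-era risk scores vs. last week's scores.
baseline = [0.12, 0.15, 0.11, 0.14, 0.13]
recent = [0.22, 0.25, 0.21, 0.24, 0.23]

drifted, shift = drift_check(baseline, recent)
if drifted:
    print(f"Output drift of {shift:.2f} exceeds tolerance; route to governance review.")
```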