One key barrier to AI adoption in healthcare is the medico-legal implications of using AI algorithms that make predictions and provide recommendations. Clinicians may rely on these recommendations to make key decisions about patient management. What if there are issues with those recommendations? What if the patient is harmed as a result? Who is responsible for the AI's errors? Consider an algorithm that flags patients at risk of sepsis, based on patterns in their vital signs, before physicians or nurses would notice. The algorithm may miss some cases and falsely flag others. Even if clinicians depend heavily on the algorithm to generate an alarm, they are still responsible if a patient becomes septic.
As AI capabilities are incorporated into medical care, responsibility for clinical decisions can become unclear, and in that gray area lies potential liability if adverse events occur. Organizations must understand how those decisions factor into a patient's care. Take, for example, a radiologist who depends on AI to make the first analytic pass over images and flag those that require further review. If the AI misses an image that shows an abnormality, a plaintiff might successfully challenge the trust the physician placed in the algorithm. On the other hand, once the standard of care has expanded to include AI for reading images, radiologists who miss a malignancy could be held liable for not using AI in the evaluation.
If a device relies on a biased algorithm and produces a less-than-ideal outcome for a patient, that could lead to claims against the manufacturer or the health organization. Likewise, a clinician who relies on a device in a medical setting without accounting for varied outcomes across different groups of patients may be at risk of a malpractice lawsuit. How to address and prevent such legal risks depends on the situation.
When an organization plans to subscribe to or implement a tool, it should screen the vendor: ask questions about how the algorithm was developed and how the system was trained, including whether it was tested on representative populations. If the tool will interact directly with patient care, consider building its functionality into the informed consent process, if appropriate.