The mutable and opaque nature of AI makes it difficult to determine liability in malpractice claims and to apply professional regulatory standards. Health systems that choose to implement AI before case law on these issues is established may increase their litigation risk, particularly when products are developed without a complete understanding of how they will be used in a health care environment.
It is important to remember that almost all of the medical algorithms approved to date are classified as assistive devices: their role is to help the physician make better decisions in patient management, and the clinician remains ultimately responsible for those decisions. This creates a dilemma. If physicians do not know how an AI algorithm reached its conclusions, how can they feel legally protected in relying on it to make important decisions that affect patients' lives?
A salient example is IBM Watson's oncology recommendations, which proved not only erroneous but contrary to explicit black box warnings; such failures could trigger a large number of claims in which patients were harmed as a result of an algorithm's recommendations. Yet the treatment decisions were made by physicians, and they may bear the ultimate legal responsibility for what happens to those patients.
Another major area of potential legal issues surrounds data. This includes providers sharing data with vendors that are developing AI solutions, movement of data out of medical centers to run AI models, with the attendant risk of privacy and security breaches, and data-sharing arrangements that can be seen as profiteering from patient data. Google and the University of Chicago Medical Center were named in a class action lawsuit after a former patient alleged that the organizations failed to properly de-identify sensitive medical data. Memorial Sloan Kettering also faced backlash over its IBM Watson collaboration, begun in 2012, after internal documents alleged that the Watson supercomputer gave unsafe recommendations for treating cancer patients. In a separate incident, ProPublica and The New York Times reported that three members of the cancer center's board held investments in Paige.AI, founded in early 2018 with $25 million in venture capital.
The investment included an exclusive deal for Paige.AI to use Memorial Sloan Kettering's archive of 25 million patient tissue slides, an arrangement that drew backlash from Memorial Sloan Kettering pathologists upset that the founders received equity stakes in a company relying on the pathologists' expertise, work, and the data associated with it. These and other early partnerships between health care organizations and AI companies have attracted highly unfavorable attention, largely because of fears of data misuse. This is likely to continue as patient privacy advocates critique the contract structure of vendor-provider data analytics partnerships. The crucial issue is whether enough has been done to prevent a patient's identity from being recovered by linking the anonymized medical record with other data to which the AI company has access.
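To make that re-identification concern concrete, the sketch below illustrates a so-called linkage attack: records stripped of names can still be re-identified by joining the quasi-identifiers that remain (ZIP code, birth date, sex) against an auxiliary dataset the recipient already holds. This is a minimal, hypothetical example; the column names and values are illustrative assumptions and are not drawn from any of the partnerships discussed above.

```python
# Minimal sketch of a linkage (re-identification) attack on "anonymized" records.
# All data, column names, and values are hypothetical and for illustration only.

import pandas as pd

# A "de-identified" clinical extract: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
deidentified = pd.DataFrame({
    "zip":        ["60637", "60615", "60637"],
    "birth_date": ["1961-07-31", "1975-02-14", "1961-07-31"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["type 2 diabetes", "hypertension", "breast cancer"],
})

# An auxiliary dataset a vendor or other third party might already hold,
# e.g. marketing lists or app sign-up data with names attached.
auxiliary = pd.DataFrame({
    "name":       ["Jane Roe", "John Doe"],
    "zip":        ["60637", "60615"],
    "birth_date": ["1961-07-31", "1975-02-14"],
    "sex":        ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to the "anonymous" records.
relinked = deidentified.merge(auxiliary, on=["zip", "birth_date", "sex"], how="inner")
print(relinked[["name", "diagnosis"]])
```

This is why de-identification standards focus on generalizing or suppressing quasi-identifiers such as full ZIP codes and exact dates, rather than merely removing names.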