Late last year, President Biden signed an executive order that aimed to take a robust step toward regulating the runaway train of AI. The concern is that, if not appropriately safeguarded, AI could have severe consequences for society. From deepfakes to robocalls mimicking the voices of famous people, it is clear that AI can be a powerful tool for both good and evil. Since the evil part is here and now, the discussion about how much regulation is appropriate is intensifying. Of course, the US government can regulate AI here, but what about China? Russia? What if we slow down to consider the consequences and build safeguards while they race ahead to build exactly the kinds of AI systems we fear, causing harm to us and our institutions? After all, those countries lack the checks and balances of our system of government, and a single bad actor can decide to forge ahead with a harmful system.
The executive order, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” directs a broad range of actions around new standards for AI that will affect many sectors, and it articulates eight guiding principles and priorities to govern the development and use of AI.
EO’S EIGHT GUIDING PRINCIPLES AND PRIORITIES FOR DEVELOPMENT AND USE OF AI
- Safety and Security: AI must be safe and secure, which will require rigorous testing, evaluation and monitoring of AI systems, along with labeling and content provenance mechanisms to foster transparency and trust.
- Innovation and Competition: Promoting responsible innovation, competition and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
- Commitment to Workforce: The responsible development and use of AI requires a commitment to supporting American workers.
- Equity and Civil Rights: AI should not deepen discrimination and must comply with federal laws to advance civil rights.
- Consumer Protection: The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
- Privacy: Americans’ privacy and civil liberties must be protected as AI continues advancing. AI is making it easier to extract, re-identify, link, infer and act on sensitive information about people’s identities, locations, habits and desires.
- Government Use of AI: It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
- Global Leadership: The federal government should lead the way to global societal, economic and technological progress, as the United States has in previous eras of disruptive innovation and change.
While all of this makes sense, implementing it in the real world will not be easy, especially in healthcare. The executive order’s healthcare provisions require the establishment of a task force to develop a strategic plan that includes policies and frameworks (possibly including regulatory action, as appropriate) on the responsible deployment and use of AI and AI-enabled technologies in the health and human services sector, spanning research and discovery, drug and device safety, healthcare delivery and financing, and public health, and to identify appropriate guidance and resources to promote such deployment.
I believe that federal-level regulation will provide clarity and protection for the use of AI in healthcare. As such, I applaud this move and think it’s a step in the right direction. Until we see the regulation that is actually developed, however, it will be hard to predict whether it will be a net driver of or barrier to health AI.
There is, however, a rather concerning development in Georgia, which has drafted two pieces of legislation focused on the intersection of healthcare and AI. Most recently, proposed House Bill 887 (HB 887) introduces restrictions on the use of AI in the delivery of healthcare. It follows the recently enacted HB 203, a first-of-its-kind state law expressly permitting the use of AI tools in clinical settings. In contrast to HB 203, HB 887 cuts across care settings to address the general use of AI in a variety of healthcare contexts in Georgia. If HB 887 becomes law, it could have broad implications for any healthcare provider in the state. HB 887 also proposes similar restrictions on the use of AI in automated decision-making tools for public assistance and insurance coverage.
The proposed legislation uses a definition for “artificial intelligence” that is extremely broad and establishes a new definition for the term “automated decision tool.” HB 887 defines “artificial intelligence” as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing a real or virtual environment. It defines an “automated decision tool” as a system or service that uses AI and has been specifically developed and marketed, or specifically modified, to make (or to be a controlling factor in making) consequential decisions.
Notably, HB 887 prohibits clinicians from basing healthcare decisions solely on results produced by AI or automated decision tools. In addition, HB 887 requires any healthcare decision-making process that results from the use or application of AI or automated decision tools to be meaningfully reviewed in accordance with procedures established by the Georgia Composite Medical Board.
It remains to be seen whether HB 887 will become law, but state-level regulation is concerning because it can produce a patchwork of laws and regulations that makes compliance difficult for companies building these solutions. It can also slow innovation through the required reviews by state boards and other organizations.