November 2, 2024


Pathways to Governing AI Technologies in Healthcare

Artificial intelligence (AI) has the potential to transform healthcare delivery by enhancing diagnostic accuracy, streamlining administrative operations, and increasing patient engagement. From 2017 to 2021, the healthcare sector attracted $28.9 billion in private AI investment globally, more than any other sector.

Enthusiasm for these new healthcare technologies has long been accompanied by concerns about patient safety, harmful bias, and data security. Regulators face the challenge of fostering these innovative tools while keeping machine learning algorithms safe, fair, and secure, all within the constraints of regulatory frameworks created in an era of physical devices, paper records, and analog data. The rapid adoption of AI into healthcare processes makes a review of these existing frameworks urgent.

Recognizing this gap, the Stanford Institute for Human-Centered AI (HAI) convened a select group of 55 leading policymakers, scientists, healthcare providers, ethicists, AI developers, and patient advocates for a closed-door workshop in May 2024. The meeting was hosted by HAI’s Healthcare AI Policy Steering Committee—a multidisciplinary committee of Stanford faculty that works to advance policy and research in these areas—to highlight key AI policy gaps and to galvanize support for regulatory changes.

Read a related conversation with HAI Associate Director Curt Langlotz: How Can We Better Regulate Health AI?

Under the Chatham House Rule, participants discussed shortcomings in federal healthcare AI policy in three areas: AI software for clinical decision support, healthcare enterprise AI tools, and patient-facing AI applications. Below, we summarize the key themes, policy considerations, and participant opinions for each regulatory area. 

Like Driving a 1976 Chevy Impala on 2024 Roads

Healthcare is one of the most highly regulated industries in the United States. And the industry’s wide-ranging regulatory frameworks are already being applied to AI. 

The Food and Drug Administration (FDA) regulates many of these software systems, primarily through its 510(k) device clearance process, which treats software as a medical device (SaMD). AI applications used in administrative and clinical enterprise contexts must adhere to rules from the Office of the National Coordinator for Health Information Technology that, for example, mandate algorithmic transparency. Direct-to-consumer health AI tools fall under various consumer product frameworks, although little enforcement has yet occurred in this nascent area.

These regulatory frameworks are outdated. The FDA’s regulatory authority, established in 1976, was designed for hardware devices, not software that relies on training data and requires meticulous ongoing performance monitoring. Similarly, the Health Insurance Portability and Accountability Act (HIPAA), a 1996 law that set national standards for the privacy and security of health data, predates the explosion of digital health information. Its provisions did not anticipate the vast amounts of patient records needed to train machine learning algorithms.

Regulators are effectively driving a 1976 Chevy Impala on 2024 roads, struggling to adapt to today’s road conditions, as one participant put it. Traditional regulatory paradigms in healthcare urgently need to adapt to a world of rapid AI development. The vast majority of workshop participants believe a new or substantially changed regulatory framework is necessary for effective healthcare AI governance.

[Figure: Bar chart showing that most respondents think major changes to existing regulation can effectively govern AI.]

Use Case 1: AI in Software as a Medical Device

Developers of novel AI-powered medical devices with diagnostic capabilities currently face a major challenge: The FDA device clearance process requires them to submit evidence for each individual diagnostic capability. For AI products with hundreds of diagnostic capabilities, such as an algorithm that can detect substantially all abnormalities that might appear on a chest X-ray, submitting each one for regulatory clearance is not commercially feasible. As a result, global software companies may bring downgraded, less innovative products to market, hindering U.S. AI medical device innovation.

Workshop participants proposed new policy approaches to streamline market approval for these multifunctional software systems while still ensuring clinical safety. First, public-private partnerships will be crucial to managing the evidentiary burden of approval, with a potential focus on advancing post-market surveillance. Second, participants supported better information sharing during the device clearance process: Disclosing details about test data and device performance could enable healthcare providers to better assess whether software tools will operate safely in their practices. Although close to 900 medical devices incorporating AI or machine learning software have been cleared by the FDA, clinical adoption has been slow because healthcare organizations have limited information on which to base purchasing decisions.

Finally, some participants called for more fine-grained risk categories for AI-powered medical devices, the vast majority of which are currently classified as Class II devices with moderate risk. Clinical risk varies greatly between different types of AI/machine learning software devices, necessitating a more tailored approach. For example, an algorithm that measures the dimensions of a blood vessel for later human review is lower risk than an algorithm that triages mammograms to bypass human review. 

Use Case 2: AI in Enterprise Clinical Operations and Administration

Should a human always be in the loop when autonomous AI tools are integrated into clinical settings? Fully autonomous AI technologies, such as those that diagnose eye conditions or auto-report normal chest X-rays, promise to address severe physician shortages. Other forms of automation, such as ambient intelligence technologies that draft responses to patient emails or capture progress notes during doctor-patient interactions, can also greatly improve efficiency in clinical settings.

Some participants argued for human oversight to ensure safety and reliability, while others warned that human-in-the-loop requirements could increase the administrative burden on doctors and make them feel less accountable for resulting clinical decisions. Some identified laboratory testing as a successful hybrid model: A device operates under physician oversight and undergoes regular quality checks, and any out-of-range values are reviewed by a human, as sketched below.
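As an illustration, here is a minimal sketch of that hybrid pattern, assuming a hypothetical lab analyzer whose results auto-release when they fall within a physician-approved reference range and queue for human review otherwise; the test names, ranges, and function are illustrative assumptions, not drawn from any real device.

```python
from dataclasses import dataclass

# Hypothetical physician-approved reference ranges (illustrative values only).
REFERENCE_RANGES = {
    "potassium_mmol_l": (3.5, 5.2),
    "glucose_mg_dl": (70.0, 140.0),
}

@dataclass
class LabResult:
    test: str
    value: float

def route_result(result: LabResult) -> str:
    """Auto-release in-range results; escalate everything else to a human.

    Mirrors the hybrid model participants described: the device runs
    autonomously under physician oversight, and out-of-range values
    are always checked by a person before release.
    """
    low, high = REFERENCE_RANGES[result.test]
    if low <= result.value <= high:
        return "auto-release"   # routine result, no human review needed
    return "human-review"       # out-of-range: queue for a clinician

# Example: a normal potassium auto-releases; a high glucose is escalated.
print(route_result(LabResult("potassium_mmol_l", 4.1)))  # auto-release
print(route_result(LabResult("glucose_mg_dl", 212.0)))   # human-review
```

The design keeps the autonomous path fast for routine work while guaranteeing that every anomalous result crosses a human desk, which is the property participants credited for laboratory testing's success.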

[Figure: Bar chart showing that most respondents think AI doesn’t need a human in the loop if safeguards are in place.]

The integration of AI in clinical settings also raises the question of what levels of transparency healthcare providers and patients need to use AI tools safely. What responsibilities do developers have to communicate information about model design, functionality, and risks? One proposal is model cards, which act as a kind of “nutrition label” that healthcare providers can consult when deciding whether to use an AI tool.
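For illustration, a minimal model card might look like the sketch below; every field and value is a hypothetical assumption rather than a released standard or real product, but it captures the kind of design, performance, and risk information such a label could carry.

```python
# A hypothetical, minimal model card represented as plain data.
# Field names and values are illustrative assumptions, not a real standard.
model_card = {
    "name": "chest-xray-triage",  # hypothetical model
    "intended_use": "Prioritize likely-abnormal chest X-rays for radiologist review",
    "not_intended_for": ["pediatric imaging", "autonomous diagnosis"],
    "training_data": "De-identified adult chest X-rays from multiple sites",
    "performance": {"sensitivity": 0.94, "specificity": 0.88},  # illustrative numbers
    "known_risks": [
        "Lower accuracy on portable X-rays",
        "Not validated on rare conditions",
    ],
    "monitoring": "Quarterly performance audit against local data",
}

# A provider could scan the label before deciding whether to adopt the tool.
for field, value in model_card.items():
    print(f"{field}: {value}")
```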

Additionally, should patients be told when AI is used at any stage of their treatment, and, if so, how and when? Patients often delegate decisions about which technologies to use (from scalpels to decision-support pop-up windows) to their caregivers and the healthcare organizations those caregivers work for. And less sophisticated forms of AI are already used throughout the healthcare system, such as rule-based systems that warn of drug-drug interactions. Yet many participants felt that in some circumstances, such as an email message that purports to come from a healthcare provider, the patient should be informed that AI played a role.

Use Case 3: Patient-Facing AI Applications

An increasing number of patient-facing applications, such as mental health chatbots built on large language models (LLMs), promise to democratize healthcare access or offer new services to patients through mobile devices. And yet, no targeted guardrails ensure that these patient-facing, LLM-powered applications avoid giving out harmful or misleading medical information, even when the chatbots claim not to offer medical advice while sharing information that closely resembles it.

Clarification of the regulatory status of these patient-facing products is urgently needed. Yet workshop participants disagreed over whether generative AI applications, for example, should be governed more like medical devices or medical professionals.

[Figure: Pie chart showing that 56% of respondents think health AI should be governed like medical professionals.]

The patient perspective is crucial to ensuring the trustworthiness of healthcare AI applications and the healthcare system more broadly. Many participants noted that patients rarely participate in the development, deployment, or regulation of patient-facing AI applications. The needs and viewpoints of entire patient populations must be considered to ensure regulatory frameworks address health disparities caused or exacerbated by AI.

What Comes Next?

These are only a few of the many questions and concerns surrounding the future of healthcare AI regulation. Much more multidisciplinary research and multistakeholder discussions are needed to answer these questions and develop feasible policy solutions that assure safety while supporting a nimble approach that brings innovative, life-saving AI applications to market. HAI and its Healthcare AI Policy Steering Committee will continue to conduct research into these areas to support policy and regulatory frameworks that lead to the safe, equitable, and effective use of healthcare AI.

