July 16, 2024


The Future of LLMs in Healthcare: 5 Clinical Use Cases


LLMs, Data Privacy, Patient Safety and Hallucinations 

Earlier this year, the World Health Organization released guidelines for the ethical use of LLMs and other AI in healthcare. WHO recognized the potential of LLMs while highlighting a range of risks, including but not limited to inaccuracy, bias, lack of accountability, threats to privacy and a further widening of the digital divide. “How LMMs [large multimodal models] are accessed and used is new, with both novel benefits and risks that societies, health systems and end users may not yet be prepared to address fully,” WHO noted.

Ananth says it’s critical for both developers and users of LLMs to be accountable for their actions. This includes the harm that may be caused by using LLM outputs without having a human in the loop. He recommends six safeguards:

  • Set guidelines on where LLMs and generative AI can and cannot be used throughout the organization.
  • Use diverse data sets, and apply rigorous testing and human feedback for reinforcement learning.
  • Protect data sets from cyberattacks by ensuring only authorized individuals and systems can access them and requiring identity verification prior to access.
  • Integrate “explainable AI techniques” so end users can understand why an LLM made a given recommendation.
  • Maintain a transparent development process with “continuous dialogue” about the capabilities and limitations of LLMs.
  • Monitor and evaluate the performance of LLMs, particularly in the way they impact outcomes, to maintain compliance with regulatory and ethical standards.

One area of concern for LLMs is “hallucinations” — model outputs that are flat-out wrong. (The visual analogue: generative image models producing people with seven fingers or three arms.) The stakes are certainly high in healthcare, particularly when it comes to making a diagnosis or determining a billing code. That explains in large part why physicians using LLMs to respond to patient portal messages take the time to review model outputs.

Prashant Natarajan, vice president of strategy and products for H2O.ai, says developers and users should recognize that hallucinations are an inherent part of LLMs, and keep that in mind as they deploy them.

“Generative AI models are designed to process large amounts of text data. They do a good job predicting the next token in a sequence,” Natarajan says, such as the letter most likely to come after “Q” in a word. “It’s not a mathematical prediction model.”
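Natarajan’s point about next-token prediction can be illustrated with a toy sketch. Everything below is an illustrative assumption — the corpus, the character-level counting and the `predict_next` helper are not how production models work; real LLMs learn these probabilities with neural networks over subword tokens rather than frequency counts.

```python
from collections import Counter, defaultdict

# Toy character-level "next token" predictor built from raw counts.
corpus = "the quick question was quietly queued for the quorum"

# Count how often each character follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(char, k=3):
    """Return up to k most likely next characters with their probabilities."""
    counts = transitions[char]
    total = sum(counts.values())
    return [(c, n / total) for c, n in counts.most_common(k)]

print(predict_next("q"))  # every "q" in this corpus is followed by "u"
```

Even this crude model captures the intuition: after “q,” the letter “u” is overwhelmingly likely, and the model will predict it regardless of whether the result is factually meaningful.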

LLMs need to be tested, Natarajan says, and organizations need to look at the hallucinations that emerge. “In some cases, you want hallucinations because you can use known techniques to reduce them. You need to understand where hallucinations will be useful. You won’t know unless you do it.”
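One way to “look at the hallucinations that emerge” is a scored reference set. The sketch below is hypothetical: `ask_model` stands in for whatever LLM interface an organization actually uses, and the two Q&A pairs are illustrative examples of the kind of billing-code questions where accuracy matters.

```python
# Hypothetical hallucination check: grade model answers against a
# curated reference set. `ask_model` is a stand-in for a real LLM call.
reference_qa = {
    "Which ICD-10 code denotes type 2 diabetes without complications?": "E11.9",
    "Which ICD-10 code denotes essential (primary) hypertension?": "I10",
}

def hallucination_rate(ask_model, reference_qa):
    """Fraction of reference questions whose expected answer is missing
    from the model's response (a crude proxy for hallucination)."""
    wrong = sum(
        1
        for question, expected in reference_qa.items()
        if expected.lower() not in ask_model(question).lower()
    )
    return wrong / len(reference_qa)

# A stub "model" that gets one answer right and one wrong:
stub = lambda q: "E11.9" if "diabetes" in q else "unsure"
print(hallucination_rate(stub, reference_qa))  # 0.5
```

In practice the reference set would be large, clinician-curated and rerun on every model update, with the flagged answers reviewed by humans rather than trusted to string matching alone.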


The Future of LLMs in Healthcare

An analysis from Stanford suggested there may be significant untapped potential for LLMs in healthcare. Many LLMs to date have been used for tasks such as augmenting diagnostics or communicating with patients. Far fewer models address the administrative tasks that contribute to clinician burnout.

“We urgently need to set up evaluation loops for LLMs where models are built, implemented and then continuously evaluated via user feedback,” the study’s authors concluded.

Natarajan says LLMs are useful now, albeit in a limited context. The “frontier,” he says, is when LLMs are further embedded in the applications that clinicians and patients use every day, appearing to complete a task and then disappearing when they’re done.

“AI is moving to interaction, behavior, context and intelligent agents. It’s connecting to behaviors, reactions and emotions,” he says. “The world is expanding beyond writing emails.”
