Study finds ChatGPT provided inaccurate answers to medication questions
A study by pharmacists found that nearly three-fourths of the answers OpenAI’s ChatGPT gave to a series of drug-related questions were incomplete or inaccurate.
ChatGPT, which uses generative artificial intelligence (AI) to form responses to users’ prompts from data drawn from the internet, was challenged by researchers with real questions posed to Long Island University’s College of Pharmacy drug information service over a 16-month period spanning 2022 and 2023. The study was presented at the American Society of Health-System Pharmacists’ (ASHP) Midyear Clinical Meeting on Tuesday.
Pharmacists first researched and answered 45 questions, and those responses were reviewed by a second investigator to serve as the standard against which ChatGPT’s answers would be judged. Six of the questions were excluded because there was not enough published literature to support a data-driven answer, leaving 39 questions for ChatGPT to answer.
The study found that ChatGPT provided satisfactory answers to just 10 of the 39 questions. Among the other 29, there were 11 cases in which ChatGPT’s response didn’t directly address the question, 10 in which it gave an inaccurate response and 12 in which its answer was incomplete, with some responses falling into more than one category. Researchers also asked ChatGPT to provide references in its responses, which it did in just eight of its answers; according to the study, each of those eight included references that do not exist.
“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” said Sara Grossman, PharmD, a lead author of the study and an associate professor of pharmacy practice at Long Island University.
“Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources,” Grossman added.
In one case, the researchers asked ChatGPT whether there is a risk of a drug interaction between the COVID-19 antiviral Paxlovid and verapamil, a medication that lowers blood pressure, and the chatbot said no interactions had been reported for that combination of drugs.
“In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” Grossman said. “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”
While AI tools like ChatGPT have shown potential in pharmacy and other medical settings, the study’s findings indicate that pharmacists should evaluate the use of specific AI tools for medication-related use cases and talk to patients about trustworthy sources of information about their medications, according to Gina Luchen, PharmD, ASHP’s director of digital health and data.
“AI-based tools have the potential to impact both clinical and operational aspects of care,” Luchen said. “Pharmacists should remain vigilant stewards of patient safety, by evaluating the appropriateness and validity of specific AI tools for medication-related uses, and continuing to educate patients on trusted sources for medication information.”
A spokesperson for ChatGPT-maker OpenAI told FOX Business, “We guide the model to inform users that they should not rely on its responses as a substitute for professional medical advice or traditional care.”
Additionally, OpenAI’s usage policies note that “OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”