ChatGPT has potentially dangerous consequences in healthcare: Researchers

“There's a risk,” said ER pediatrician Dr. Esli Osmanlliu about using ChatGPT in healthcare and medical writing. A study shows the chatbot provided factual errors and fabricated references. Brittany Henriques reports.

The artificial intelligence model ChatGPT has taken the world by storm – with its ability to write up just about anything you ask it.

But what happens when something as important as the healthcare sector starts using it for medical writing and in-hospital care?

Researchers from CHU Sainte-Justine and the Montreal Children’s Hospital conducted a study in which they asked ChatGPT 20 medical questions.

They found the chatbot provided answers of limited quality, including factual errors and fabricated references.

“When we tried to put everything together and looked on the internet to see whether that study had actually been published, we realized that in over 70 per cent of the cases the references had been fabricated. It seemed real but in fact it was not,” explained Dr. Esli Osmanlliu, an ER pediatrician at the Montreal Children’s Hospital and a scientist at the MUHC Research Institute.

“These results are alarming given that trust is a pillar of scientific communication. ChatGPT users should pay particular attention to the references provided before integrating them into medical manuscripts,” says Dr. Jocelyn Gravel, lead author of the study and emergency physician at CHU Sainte-Justine.

Montreal researchers say the tool can be both dangerous and helpful – and needs to be used with caution.

“There’s a risk of propagating misinformation and disinformation if we don’t check our sources. We’ve seen that issue in many public debates in the past few years, so certainly tools like ChatGPT can exacerbate that issue,” said Dr. Osmanlliu.

“If you lose that transparency and that trust in scientific writing, you also lose the trust of the public.”

To test it out – CityNews asked ChatGPT what the dangers were in using its platform in medicine.

This is what it had to say:

While ChatGPT and similar AI language models hold great potential for various industries, including medicine, there are several potential dangers and challenges associated with their use in this field – such as:

  • Inaccurate or misleading information;
  • Lack of contextual understanding;
  • Legal and ethical concerns;
  • Overreliance on AI;
  • Privacy and data security;
  • Bias and disparities;
  • Lack of accountability;
  • Dynamic and evolving field;
  • Loss of human interaction; and
  • Resistance from medical professionals.

“In many cases, artificial intelligence can be a mirror of our society, especially large language models like ChatGPT. It is trained on massive amounts of data – we’re talking about millions of data points – so it acts with such humanness, but the accuracy can suffer sometimes,” explained Dr. Osmanlliu.

Despite the issues with ChatGPT in the medical field at this moment – some experts say it has the potential to greatly benefit the healthcare system and the patients it serves.

“In healthcare, I can think of a lot of areas where tools like this one can make us more efficient and, at the end of the day, provide more human care. Tools like that can help with some potentially automatable tasks that are currently taking hours of precious provider time.”
