Study Warns AI Chatbots are Unsafe for Medical Advice

Oxford researchers warn that relying on AI chatbots for medical advice can be dangerous, with inaccurate responses putting patient safety at risk.

Using artificial intelligence chatbots to seek medical advice can be dangerous, according to new research from the University of Oxford, raising concerns about patient safety.

The study found that relying on AI for medical decision-making carries clear risks, largely because chatbots share inaccurate and inconsistent information.

Researchers warned that while AI tools appear confident, they often fail to deliver reliable guidance when people seek help for real health symptoms.

The research was led by experts from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford.

Dr Rebecca Payne, a GP and co-author of the study, said the findings challenge growing public trust in AI-powered health advice.

She explained that, despite widespread hype, artificial intelligence is not ready to replace trained medical professionals in clinical decision-making.

Dr Payne warned that patients asking large language models about symptoms risk receiving incorrect diagnoses and missing signs that urgent medical help is needed.

The study tested how people interact with AI when faced with health-related decisions, highlighting the gap between theoretical knowledge and real-world care.

Nearly 1,300 participants were asked to identify possible health conditions and decide on appropriate actions across different medical scenarios.

Some participants used large language model chatbots to obtain potential diagnoses and advice, while others followed traditional routes, including consulting a GP.

Researchers then assessed the quality of decisions made, focusing on accuracy, safety, and whether users recognised when urgent care was required.

The results showed that AI systems often delivered a mix of helpful and harmful information, which many users struggled to separate or question.

Even when correct details were included, misleading suggestions frequently appeared alongside them, creating confusion rather than clarity for patients.

Researchers noted that while AI chatbots perform well in standardised medical knowledge tests, real-world use tells a different story.

They warned that applying AI directly to personal health concerns could pose serious risks to individuals seeking reassurance or guidance.

The study stressed that medical advice requires judgment, context, and human interaction, all areas where AI still falls short.

Dr Payne said the findings highlight the difficulty of building AI systems that can truly support people in sensitive, high-stakes areas like healthcare.

She added that health decisions are rarely straightforward and often depend on recognising nuance, urgency, and emotional cues.

The study’s lead author, Andrew Bean from the Oxford Internet Institute, said even top-performing AI models struggle when interacting with humans.

He explained that understanding how people describe symptoms, emotions, and uncertainty remains a major challenge for current AI systems.

Bean said the research aims to encourage the development of safer and more responsible AI tools.

For South Asian communities, where access barriers, language differences, and cultural stigma already affect healthcare, these risks may be even greater.

Experts caution that AI should not replace professional medical advice, especially for communities already navigating complex health inequalities.

The researchers emphasised that AI may still play a supportive role in healthcare, but only alongside trained professionals and strict safeguards.

They urged patients to treat chatbot advice with caution and to seek help from qualified doctors when experiencing symptoms or health concerns.
