This event is archived; this page is the final snapshot from when the story concluded.

Oxford Study: AI Medical Advice 'Dangerous'

Analysis based on 32 articles · First reported Feb 09, 2026 · Last updated Feb 11, 2026

Sentiment: -20 · Attention: 4 · Articles: 32 · Market Impact: Direct

The study's findings suggest a cautious outlook on the immediate integration of AI chatbots into healthcare, potentially dampening investor enthusiasm for AI companies like OpenAI, Meta Platforms, and Cohere in the medical sector. It highlights the need for significant advancements in AI's ability to handle complex human interactions before widespread adoption in sensitive fields.

Artificial intelligence · Healthcare · Technology

A new study led by the University of Oxford and published in Nature Medicine found that using AI chatbots for medical advice can be 'dangerous' due to their tendency to provide inaccurate and inconsistent information. Researchers, including Rebecca Payne and Adam Mahdi, tested large language models such as OpenAI's GPT-4o, Meta Platforms' Llama 3, and Cohere's Command R+ in both controlled and real-world scenarios. While the models showed high technical accuracy in identifying conditions, their performance deteriorated significantly when used by human participants, who often provided incomplete information. The study concluded that AI tools did not help people make better health decisions than traditional methods such as internet searches or consulting the United Kingdom's National Health Service, underscoring a 'huge gap' between AI's theoretical potential and its practical application in healthcare.

90 University of Oxford published study on AI chatbot health advice
80 Rebecca Payne warned about dangers of AI for health advice
70 OpenAI developed GPT-4o chatbot
70 Meta Platforms developed Llama 3 chatbot
70 Adam Mahdi highlighted gap between AI potential and pitfalls
50 David Shaw advised public to trust reliable medical sources
NGO
The University of Oxford led the research through its Oxford Internet Institute and Nuffield Department of Primary Care Health Sciences, publishing findings in Nature Medicine. This study highlights the institution's role in evaluating emerging technologies like AI in healthcare.
Importance 90 Sentiment 10
Person
Rebecca Payne, a co-author from University of Oxford, stated that 'AI just isn't ready to take on the role of the physician' and warned about the dangers of using large language models for symptoms. Her statements are central to the study's findings.
Importance 80 Sentiment 10
Private company
OpenAI's GPT-4o was one of the large language models tested, which, despite high technical accuracy in identifying conditions, performed poorly in real-world patient interactions, suggesting limitations in its practical application for medical advice.
Importance 70 Sentiment -10
Public company
Meta Platforms' Llama 3 was another large language model evaluated in the study, showing similar discrepancies between theoretical accuracy and real-world performance in providing medical advice.
Importance 70 Sentiment -10
Person
Adam Mahdi, co-author and associate professor at University of Oxford, highlighted the 'huge gap' between AI's potential and its pitfalls when used by people for health advice, emphasizing the need for further research.
Importance 70 Sentiment 10
Private company
Cohere's Command R+ was included in the study as one of the large language models, demonstrating the general trend of AI chatbots struggling with practical medical decision-making despite high technical knowledge.
Importance 60 Sentiment -10
Person
David Shaw, a bioethicist at Maastricht University, commented on the study, emphasizing the 'real medical risks posed to the public by chatbots' and advising reliance on reliable sources such as the United Kingdom's National Health Service.
Importance 50 Sentiment 10
+ 9 more entities

About NewsDesk

NewsDesk is a news intelligence platform that converts raw news articles into structured data. It tracks events, entities, and the relationships between them, with sentiment and attention metrics derived from thousands of articles. Pages on this site are daily static snapshots from the platform's live database. For real-time tracking, search, and alerts, the full dashboard is at app.newsdesk.dev.