Oxford Study: AI Medical Advice 'Dangerous'
Analysis based on 32 articles · First reported Feb 09, 2026 · Last updated Feb 11, 2026
The study's findings suggest a cautious outlook on the immediate integration of AI chatbots into healthcare, potentially dampening investor enthusiasm for the medical-sector ambitions of AI companies such as OpenAI, Meta Platforms, and Cohere. It highlights the need for significant advances in AI's ability to handle complex human interactions before widespread adoption in sensitive fields.
A new study led by the University of Oxford and published in Nature Medicine found that using AI chatbots for medical advice can be 'dangerous' due to their tendency to provide inaccurate and inconsistent information. Researchers, including Rebecca Payne and Adam Mahdi, tested large language models such as OpenAI's GPT-4o, Meta Platforms' Llama 3, and Cohere's Command R+ in both controlled and real-world scenarios. While the AI models showed high technical accuracy in identifying conditions, their performance deteriorated significantly when used by human participants, who often provided incomplete information. The study concluded that the AI tools did not help people make better health decisions than traditional methods such as internet searches or consulting the United Kingdom's National Health Service, underscoring a 'huge gap' between AI's theoretical potential and its practical application in healthcare.