AI Medical Advice Flawed by Authoritative Misinformation
Analysis based on 9 articles · First reported Feb 09, 2026 · Last updated Feb 10, 2026
The study highlights significant risks in applying AI to healthcare and could bring increased scrutiny and regulation for AI developers such as OpenAI. It may also drive demand for more robust, verifiable AI solutions in the medical sector.
A new study published in The Lancet Digital Health found that artificial intelligence tools are more likely to give incorrect medical advice when the misinformation appears to come from an authoritative source, such as a doctor's discharge notes. Researchers, including Eyal Klang and Girish Nadkarni of the Icahn School of Medicine at Mount Sinai, tested 20 large language models. They found the models were far more easily misled by errors embedded in realistic-looking medical documents (47% propagation) than by misinformation from social media platforms like Reddit (9% propagation). The study also noted that authoritative phrasing in user prompts made the models more likely to agree with false information. OpenAI's GPT models proved the least susceptible to these false claims. The findings underscore the need for built-in safeguards and rigorous verification processes before AI systems are widely integrated into clinical care.