This event is archived. Final snapshot from when the story concluded.
Tech · AI study

AI Medical Advice Flawed by Authoritative Misinformation

Analysis based on 9 articles · First reported Feb 09, 2026 · Last updated Feb 10, 2026

Sentiment: -20 · Attention: 4 · Articles: 9 · Market Impact: Direct

The study highlights significant risks in AI's application in healthcare, potentially leading to increased scrutiny and regulation for AI developers like OpenAI. This could drive demand for more robust, verifiable AI solutions in the medical sector.

Healthcare · Artificial intelligence · Software

A new study published in The Lancet Digital Health found that artificial intelligence tools are more prone to providing incorrect medical advice when misinformation originates from what the software perceives as an authoritative source, such as doctors' discharge notes. Researchers, including Eyal Klang and Girish Nadkarni from the Icahn School of Medicine at Mount Sinai, tested 20 large language models. They discovered that AI models were more easily misled by errors in realistic-looking medical documents (47% propagation) than by misinformation from social media platforms like Reddit (9% propagation). The study also noted that authoritative phrasing in user prompts increased the likelihood of AI agreeing with false information. OpenAI's GPT models were identified as the least susceptible to these false claims. The findings underscore the critical need for built-in safeguards and rigorous verification processes for AI systems before their widespread integration into clinical care.

90 · Eyal Klang — co-led study on AI misinformation
90 · Girish Nadkarni — co-led study on AI misinformation
80 · OpenAI — demonstrated lower susceptibility to false claims
Person
Eyal Klang co-led the study on AI's susceptibility to misinformation in medical advice. His findings highlight that AI systems prioritize the confidence of medical language over its accuracy, even when incorrect.
Importance 80 Sentiment 0
Person
Girish Nadkarni, chief AI officer of Mount Sinai Health System, co-led the study, emphasizing the need for built-in safeguards in AI systems to verify medical claims before they are presented as facts.
Importance 80 Sentiment 0
Private organization
The Icahn School of Medicine at Mount Sinai is the institution where Eyal Klang and Girish Nadkarni conducted their study, contributing to significant research on AI accuracy in medicine.
Importance 70 Sentiment 0
Private organization
Mount Sinai Health System is the parent organization of the Icahn School of Medicine at Mount Sinai, where the study on AI's medical advice accuracy was conducted. Girish Nadkarni, its chief AI officer, co-led the research.
Importance 70 Sentiment 0
Private organization
OpenAI's GPT models were found to be the least susceptible and most accurate at detecting fallacies among the AI tools tested, indicating a relatively stronger performance in identifying misinformation.
Importance 60 Sentiment 10
Public company
Reddit was used as the source of common health myths in the study; the AI models proved more skeptical of misinformation from social media than of errors embedded in authoritative-looking medical documents.
Importance 40 Sentiment 0

About NewsDesk

NewsDesk is a news intelligence platform that converts raw news articles into structured data. It tracks events, entities, and the relationships between them, with sentiment and attention metrics derived from thousands of articles. Pages on this site are daily static snapshots from the platform's live database. For real-time tracking, search, and alerts, the full dashboard is at app.newsdesk.dev.