Snapshot from Apr 21, 2026 at 07:00 UTC.

Study Reveals AI Chatbots Give Problematic Medical Advice

Analysis based on 25 articles · First reported Apr 14, 2026 · Last updated Apr 15, 2026

Sentiment: -30 · Attention: 4 · Articles: 25 · Market Impact: Direct

The study's findings are likely to increase regulatory scrutiny on AI companies like Alphabet Inc., Meta Platforms, OpenAI, Xai, and High-Flyer, potentially leading to new guidelines for AI in healthcare. This could impact investor confidence in the AI sector, especially for companies developing public-facing AI tools, as concerns about misinformation and liability rise.

Artificial intelligence · Healthcare · Technology

A new study published in BMJ Open, conducted by researchers including those from The Lundquist Institute for Biomedical Innovation, University of Alberta, and Loughborough University, found that nearly half of the medical information provided by five popular AI chatbots was problematic. The chatbots tested were Alphabet Inc.'s Gemini, High-Flyer's DeepSeek, Meta Platforms' Meta AI, OpenAI's ChatGPT, and Xai's Grok. Grok returned the most problematic responses (58%), followed by ChatGPT (52%) and Meta AI (50%). The study highlighted that chatbots often 'hallucinate,' generating incorrect or misleading information, and frequently present a false balance between science-based and non-science-based claims. Researchers emphasized the urgent need for public education, professional training, and regulatory oversight to prevent generative AI from amplifying misinformation in public health.

90 Xai Grok chatbot generated problematic responses
90 OpenAI ChatGPT chatbot generated problematic responses
90 Meta Platforms Meta AI chatbot generated problematic responses
90 Alphabet Inc. Gemini chatbot generated problematic responses
90 High-Flyer DeepSeek chatbot generated problematic responses
85 Xai's Grok generated significantly more highly problematic responses than the other chatbots tested
70 Alphabet Inc. provided inaccurate and incomplete medical information via Gemini
70 Meta Platforms provided inaccurate and incomplete medical information via Meta AI
+ 4 more actions
subs
ChatGPT, a product of OpenAI, was one of the five AI chatbots evaluated in the study, which found 52% of its medical advice problematic, raising concerns about its use in healthcare.
Importance 80 Sentiment -20
stock
Alphabet Inc.'s Gemini chatbot was included in a study that found roughly half of AI chatbot medical advice to be problematic. While Gemini performed better than some competitors, the overall negative findings could erode public trust in AI tools, including those from Alphabet Inc.
Importance 70 Sentiment -20
stock
Meta Platforms' Meta AI chatbot was part of a study revealing significant inaccuracies in medical advice. The study found 50% of its responses problematic, which could negatively affect Meta Platforms' reputation in the AI sector and public perception of its AI offerings.
Importance 70 Sentiment -20
priv
OpenAI's ChatGPT was identified in a study as providing problematic medical advice in 52% of cases. This finding raises concerns about the reliability of ChatGPT for health-related queries, potentially impacting OpenAI's standing and the adoption of its AI in sensitive fields.
Importance 70 Sentiment -20
priv
Xai's Grok chatbot showed the highest percentage of problematic responses (58%) in a study on medical advice accuracy. This poor performance could significantly damage Grok's reputation and Xai's credibility in the competitive AI market.
Importance 70 Sentiment -30
subs
Gemini was included in the study of AI chatbots providing problematic medical advice. Its performance contributed to the overall finding that about half of AI chatbot responses were flawed.
Importance 70 Sentiment -20
subs
Meta AI, a product of Meta Platforms, was one of the chatbots evaluated; the study noted that it refused to answer two questions. Its inclusion highlights the broader issues with AI chatbots giving medical advice.
Importance 70 Sentiment -20
+ 10 more entities

About NewsDesk

NewsDesk is a news intelligence platform that converts raw news articles into structured data. It tracks events, entities, and the relationships between them, with sentiment and attention metrics derived from thousands of articles. Pages on this site are daily static snapshots from the platform's live database. For real-time tracking, search, and alerts, the full dashboard is at app.newsdesk.dev.