Study Reveals AI Chatbots Give Problematic Medical Advice
Analysis based on 25 articles · First reported Apr 14, 2026 · Last updated Apr 15, 2026
The study's findings are likely to increase regulatory scrutiny of AI companies such as Alphabet Inc., Meta Platforms, OpenAI, xAI, and High-Flyer, potentially leading to new guidelines for AI in healthcare. This could dent investor confidence in the AI sector, especially in companies developing public-facing AI tools, as concerns about misinformation and liability grow.
A new study published in BMJ Open, conducted by researchers from institutions including The Lundquist Institute for Biomedical Innovation, the University of Alberta, and Loughborough University, found that nearly half of the medical information provided by five popular AI chatbots was problematic. The chatbots tested were Alphabet Inc.'s Gemini, High-Flyer's DeepSeek, Meta Platforms' Meta AI, OpenAI's ChatGPT, and xAI's Grok. Grok returned the highest share of problematic responses (58%), followed by ChatGPT (52%) and Meta AI (50%). The study noted that chatbots often 'hallucinate', generating incorrect or misleading information, and frequently present a false balance between science-based and non-science-based claims. The researchers emphasized the urgent need for public education, professional training, and regulatory oversight to prevent generative AI from amplifying misinformation in public health.