US Court Rules AI Chatbot Inputs Not Privileged
Analysis based on 24 articles · First reported Apr 15, 2026 · Last updated Apr 16, 2026
The ruling by Judge Jed S. Rakoff creates significant uncertainty for the legal tech market, particularly for AI chatbot providers such as Anthropic and OpenAI, because it underscores privacy concerns and limits the tools' utility for sensitive legal work. It also forces law firms to re-evaluate the advice they give clients about using AI, potentially increasing demand for secure, legally compliant AI solutions or for traditional legal services.
A recent ruling by US District Judge Jed S. Rakoff in New York has determined that communications typed into AI chatbots, such as Anthropic's Claude, are not protected by attorney-client privilege. The decision stems from a case involving Bradley Heppner, the former chair of bankrupt GWG Holdings, who used Claude to draft reports for his criminal defense. Judge Rakoff ruled that no attorney-client relationship exists between an AI user and a platform, and noted that providers such as Anthropic and OpenAI expressly state that users have no expectation of privacy. The ruling has prompted US law firms to issue urgent advisories warning clients about the legal risks of using AI for litigation-related communications, since such information could become accessible to prosecutors or opposing parties.