US Government Imposes Strict AI Contract Rules
Analysis based on 20 articles · First reported Mar 07, 2026 · Last updated Mar 07, 2026
The new regulations will likely increase scrutiny on AI companies seeking government contracts, potentially impacting their revenue streams if they cannot comply with the 'any lawful use' clause. For Anthropic, the immediate impact is negative due to the termination of its federal contract and the 'supply-chain risk' designation.
The Trump administration has established strict rules for civilian artificial intelligence contracts, requiring AI companies to allow 'any lawful' use of their models by the U.S. government. This development follows a dispute between the U.S. Department of Defense and Anthropic, in which Anthropic's insistence on safeguards was deemed excessive. Consequently, the Department of Defense designated Anthropic a 'supply-chain risk,' barring government contractors from using its technology for military work. The U.S. General Services Administration (GSA) has also terminated Anthropic's OneGov deal, ending its availability to federal branches. The new guidelines, which mirror potential Pentagon measures for military contracts, mandate that contractors must not encode partisan judgments into AI systems and must disclose any non-U.S. federal compliance modifications.