UK Expands Online Safety Act to AI Chatbots
Analysis based on 11 articles · First reported Feb 16, 2026 · Last updated Feb 17, 2026
The United Kingdom's decision to expand online safety laws to include AI chatbots will likely increase compliance costs for AI providers, potentially impacting their profitability and investment in the United Kingdom. This regulatory shift could also set a precedent for other nations, influencing the global AI market.
The United Kingdom government announced an expansion of its Online Safety Act to cover all AI chatbot providers, making them legally responsible for preventing the generation of illegal or harmful content. The move, spearheaded by Prime Minister Keir Starmer, aims to close a legal loophole exposed after Elon Musk's AI chatbot Grok was used to create sexualized deepfakes. The existing Online Safety Act, which came into force in July, mandates age verification and criminalizes the creation of non-consensual intimate images. However, Ofcom, the United Kingdom's media regulator, noted that some standalone AI chatbots were not covered. The new measures will extend enforcement to these systems, ensuring they meet the same legal standards as social media platforms. X, which hosts Grok, is already under investigation by Ofcom and the European Commission for failing to meet safety obligations, though X has since implemented new safeguards. The United Kingdom government is also considering a social media ban for those under 16, signaling a broader push for online child protection.