Australia Imposes Strict AI Age Verification
Analysis based on 17 articles · First reported Mar 01, 2026 · Last updated Mar 02, 2026
The new Australian AI regulations are expected to increase compliance costs for AI companies, potentially impacting their profitability and market access in Australia. Operators of 'gatekeeper services', such as Apple Inc. and Alphabet Inc., may face pressure to enforce these rules, affecting their app store and search engine operations.
Australia's eSafety Commissioner is implementing new online safety rules for AI services, effective March 9, requiring age verification and content restrictions for users under 18. AI companies, including OpenAI's ChatGPT and companion chatbots, must prevent access to pornography, extreme violence, self-harm, and eating disorder content or face fines of up to A$49.5 million ($35 million). The regulator may compel search engines and app stores, such as those operated by Apple Inc. and Alphabet Inc., to block non-compliant AI services.

A Reuters review found that over half of the 50 most popular text-based AI products had not publicly announced steps to comply. Companies like OpenAI and Character.ai have already faced lawsuits related to harmful interactions with young users.

This initiative follows Australia's earlier ban on social media for teenagers and aims to address concerns about AI's impact on youth mental health, with experts from the Royal Melbourne Institute of Technology noting that many AI tools are designed without sufficient safety controls.