OpenAI Enhances ChatGPT Security
Analysis based on 13 articles · First reported Feb 17, 2026 · Last updated Feb 18, 2026
The introduction of enhanced security features by OpenAI for ChatGPT is expected to positively impact the AI software market by increasing user trust and mitigating cyber threats like prompt injection. This move could set a new standard for AI security, potentially influencing other companies to adopt similar safeguards.
OpenAI has announced two new security safeguards for its AI systems, including ChatGPT, to combat prompt injection attacks and data exfiltration. The first is an optional 'Lockdown Mode' designed for high-risk users, such as executives and security teams, which tightly restricts how ChatGPT interacts with external systems and disables potentially exploitable features. The second is the standardization of 'Elevated Risk' labels for certain features, like network access in Codex (software), to inform users of potential security concerns. These measures aim to provide clearer alerts and more control over information, reducing vulnerabilities as AI tools become more integrated with the web and third-party applications. Lockdown Mode is currently available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers, with plans to expand to consumer users.
Set up alerts, explore entity relationships, search across thousands of events, and build custom intelligence feeds.
Open Dashboard