Days after Anthropic unveiled its Claude Mythos AI model, OpenAI introduced GPT-5.4-Cyber, a cybersecurity-focused model that will be offered broadly to vetted defenders.
OpenAI announced that it’s scaling its Trusted Access for Cyber program to thousands of verified defenders and hundreds of security teams, who will be given access to GPT-5.4-Cyber, a fine-tuned variant of GPT-5.4 that relaxes the usual guardrails for legitimate cybersecurity work.
GPT-5.4-Cyber also provides new capabilities such as binary reverse engineering, which enables users to analyze compiled executables for vulnerabilities and malicious behavior.
The new AI model is initially being offered on a limited, iterative basis to vetted security vendors, organizations, and researchers.
Individual defenders who want to enroll in the Trusted Access for Cyber program and test GPT‑5.4‑Cyber can apply through chatgpt.com/cyber via an identity verification process, while enterprise teams must go through their OpenAI account representative.
The AI giant’s announcement centers on three guiding principles: democratized access (making tools widely available through objective verification rather than manual gatekeeping), iterative deployment (learning from real-world use and improving over time), and ecosystem resilience (supporting the broader defender community through grants, open source contributions, and tools like Codex Security).
The announcement comes in the wake of Anthropic’s release of Claude Mythos, a new and powerful AI model allegedly capable of autonomously discovering thousands of zero-day vulnerabilities. This led Anthropic to withhold its public release and instead offer it only to a few dozen major organizations through a restricted program called Project Glasswing.
Both Anthropic and OpenAI say they are prioritizing defensive use while managing dual-use risks, but OpenAI believes advanced defensive tools should reach as many legitimate defenders as possible.
“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves. Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability,” OpenAI explained.
The company has not shared information about the performance of its GPT-5.4-Cyber model, but said its Codex Security platform, which automatically scans codebases and proposes fixes, has already helped identify over 3,000 critical and high-severity vulnerabilities across the open source ecosystem.
Related: Claude Code, Gemini CLI, GitHub Copilot Agents Vulnerable to Prompt Injection via Comments
Related: ‘By Design’ Flaw in MCP Could Enable Widespread AI Supply Chain Attacks
Related: ‘Mythos-Ready’ Security: CSA Urges CISOs to Prepare for Accelerated AI Threats