OpenAI Launches Cybersecurity Preview To Challenge Anthropic's Mythos
OpenAI introduced a preview of GPT-5.5-Cyber on Thursday to a limited number of cybersecurity defenders, as part of its "broader work to build the core infrastructure for AI."
For more sensitive workflows, OpenAI said it is offering GPT‑5.5‑Cyber in a limited preview with stronger verification and account-level controls. The preview is not meant to significantly increase cybersecurity capability beyond GPT-5.5; however, it is trained to be more permissive for security-related tasks.
This news comes after Anthropic announced its own AI model, Claude Mythos Preview, which hunts for and fixes software flaws in an effort to "reshape" cybersecurity, Anthropic stated.
"We are focused on providing proportional safeguards and access to empower cyber defenders to protect society, and our approach has been informed by conversations with cybersecurity and national security leaders across federal and state government and major commercial entities," OpenAI said in its announcement.
"More specialized access becomes relevant only when authorized workflows still run into refusals. This occurs with higher risk workflows such as red teaming and penetration testing, where defenders may need to go beyond analysis and validate exploitability in a controlled environment. GPT‑5.5‑Cyber is designed to facilitate these more specialized dual-use workflows," the company stated.
This launch is part of OpenAI's Trusted Access for Cyber program. The pilot program was announced in February in an effort to "enhance baseline safeguards for all users while piloting trusted access for defensive acceleration."
Beginning June 1, individuals using the most permissive cyber models through the program will need to enable Advanced Account Security, the company added.
The move comes as AI capabilities in cybersecurity approach what some describe as a "tipping point." Government officials have recently raised concerns that artificial intelligence tools could be misused to disrupt critical infrastructure such as financial systems or power grids.
Both OpenAI and Anthropic have engaged with government agencies (including defense and public-sector organizations) to deploy or evaluate AI systems in controlled settings, often with a focus on security, safety, and sensitive use cases.
Photo Courtesy: Camilo Concha on Shutterstock.com
