
OpenAI Restricts GPT-5.5 Cyber Access To Critical Security Professionals

OpenAI has announced a phased rollout for its latest cybersecurity testing tool, GPT-5.5 Cyber, but the launch comes with significant strings attached. Despite previously criticizing competitors like Anthropic for limiting access to high-end models, OpenAI is restricting this new release to a select group of "critical cyber defenders." The company cites safety concerns as the primary reason for the gated access, aiming to prevent the tool from being weaponized by bad actors.

The move marks a notable shift in the industry's ongoing tension between open innovation and defensive security. By keeping GPT-5.5 Cyber out of the general public's hands, OpenAI aims to help infrastructure and security professionals fortify their systems against AI-driven threats. However, the decision has drawn scrutiny from those who believe broader access is necessary for independent research and red-teaming efforts.

Industry experts are now watching to see how OpenAI defines "critical defenders" and whether this restrictive model becomes the blueprint for future high-capability releases. As the debate over AI safety versus accessibility intensifies, the effectiveness of these gated rollouts in preventing misuse remains to be seen. The strategy highlights the thin line companies must walk when releasing tools capable of both protecting and compromising digital infrastructure.

This report was originally published by TechCrunch.

