
OpenAI Launches High-Security Safeguards for At-Risk Users

OpenAI has introduced a new suite of advanced security features designed for individuals at higher risk of targeted cyberattacks. The "Advanced Account Protection" program lets users opt into more rigorous safeguards, most notably requiring a physical security key to log into ChatGPT and OpenAI's developer platform. The move brings the AI giant in line with security standards already common at major ecosystem players such as Google and Apple.
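Physical security keys typically rely on public-key challenge-response (the principle behind FIDO2/WebAuthn): the private key never leaves the device, and the server stores only the public half, so a phished password alone cannot unlock the account. The sketch below is a simplified illustration of that principle, not OpenAI's actual implementation:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// Simplified illustration of the challenge-response flow behind hardware
// security keys. Real FIDO2/WebAuthn adds origin binding, counters, and
// attestation; this only shows the core idea.
func main() {
	// At enrollment, the security key generates a key pair and the
	// server stores the public key.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// At login, the server issues a fresh random challenge...
	challenge := make([]byte, 32)
	if _, err := rand.Read(challenge); err != nil {
		panic(err)
	}

	// ...the key signs it locally (after a physical touch, in practice)...
	sig := ed25519.Sign(priv, challenge)

	// ...and the server verifies the signature against the stored public key.
	fmt.Println("verified:", ed25519.Verify(pub, challenge, sig))
}
```

Because the challenge is random per login, a captured signature cannot be replayed, which is what makes this scheme phishing-resistant compared with one-time codes.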

Beyond hardware-based authentication, the new settings implement more stringent identity verification protocols. These measures aim to prevent unauthorized account takeovers and data breaches that could expose sensitive chat histories or proprietary developer code. By adding these layers, OpenAI is positioning itself as a secure enterprise partner, moving away from its early reputation for rapid, unpolished growth toward a more mature infrastructure.

The rollout is particularly relevant for journalists, activists, and corporate leaders who frequently handle high-value information within AI interfaces. While standard two-factor authentication remains available for all, this opt-in program serves as a hardened shield for those in the crosshairs of sophisticated hacking groups. Moving forward, observers will be watching to see if OpenAI expands these protections to include deeper data encryption or automated threat detection.

This report was originally published by The Verge.


Now Trending summarizes the news so you can scan in seconds. Full credit and reporting belongs to the original publishers.