OpenAI is treating account protection as part of the AI product itself
OpenAI has launched Advanced Account Security, a new opt-in setting for ChatGPT accounts designed for people who face elevated digital risk and for anyone who wants stronger protections than the default account setup. The company says the feature also protects Codex accounts accessed through the same login.
The move is notable not because security options are new, but because OpenAI is packaging a stricter model of identity protection into a single mode and tying it directly to the idea that AI accounts now contain increasingly sensitive personal and professional material. As people use chatbot systems for higher-stakes work, account takeover stops being a generic consumer-security problem and becomes a gateway into data, workflows, and context accumulated over time.
What the new security mode changes
Advanced Account Security is available from the Security section of ChatGPT accounts on the web. Once enabled, it requires passkeys or physical security keys and disables password-based login. That is a strong shift toward phishing-resistant authentication, particularly for users who are more likely to be targeted by account theft or social engineering.
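The reason passkeys and security keys resist phishing is that the credential is bound to the site that created it, so a lookalike domain cannot harvest a reusable secret. A minimal sketch of that origin-binding idea (illustrative only; real passkeys use WebAuthn public-key signatures, not HMAC, and the class and domain names here are invented):

```python
import hashlib
import hmac
import secrets

class Passkey:
    """Toy credential bound to one origin at creation time."""

    def __init__(self, origin: str):
        self.origin = origin
        self._secret = secrets.token_bytes(32)  # never leaves the device

    def sign(self, origin: str, challenge: bytes):
        # The authenticator refuses to answer for any other origin,
        # so a phishing page on a lookalike domain gets nothing usable.
        if origin != self.origin:
            return None
        return hmac.new(
            self._secret, self.origin.encode() + challenge, hashlib.sha256
        ).digest()

key = Passkey("chatgpt.com")
challenge = secrets.token_bytes(16)

assert key.sign("chatgpt.com", challenge) is not None      # real site: signed
assert key.sign("chatgpt.evil.example", challenge) is None  # phish: refused
```

Because nothing reusable ever crosses the wire, there is no password for an attacker to intercept or trick the user into typing.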
The mode also changes recovery. Instead of allowing account recovery through email or SMS, OpenAI requires stronger methods such as backup passkeys, security keys, and recovery keys. The company makes clear that this comes with tradeoffs: users who enroll take on more responsibility for their own recovery, and OpenAI Support will not be able to help restore access if those stronger recovery methods are lost.
That is a meaningful design choice. OpenAI is prioritizing resistance to takeover over convenience in recovery, which is often the right trade for users facing sophisticated threats.
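A recovery key in this style is typically a high-entropy code shown to the user once, while the service keeps only a hash of it. A hedged sketch of that pattern (not OpenAI's implementation; the format and helper names are assumptions):

```python
import hashlib
import secrets

# Alphabet skips lookalike characters such as 0/O and 1/I.
ALPHABET = "ABCDEFGHJKMNPQRSTVWXYZ23456789"

def make_recovery_key(groups: int = 6, size: int = 4) -> str:
    """Generate a one-time recovery code, e.g. 'X7KQ-M3VN-...'."""
    return "-".join(
        "".join(secrets.choice(ALPHABET) for _ in range(size))
        for _ in range(groups)
    )

def fingerprint(key: str) -> str:
    """The server stores only this hash; the plaintext is shown once."""
    return hashlib.sha256(key.encode()).hexdigest()

def verify(candidate: str, stored: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return secrets.compare_digest(fingerprint(candidate), stored)

key = make_recovery_key()
stored = fingerprint(key)
assert verify(key, stored)
```

Since only the hash is stored, losing the plaintext key really does mean losing access, which is exactly the tradeoff OpenAI is asking enrolled users to accept.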
Why the launch matters now
OpenAI explicitly frames the feature around people at increased risk, including journalists, political dissidents, elected officials, and researchers. Those groups are not hypothetical edge cases. They are among the users most likely to store sensitive material, conduct consequential work, or attract targeted intrusion attempts.
But the company also opens the setting to anyone. That broad availability matters because the threat model for AI accounts is expanding beyond activists and public figures. A heavily used chatbot account may hold business plans, private health questions, code, legal drafting, strategic notes, or access to connected tools. The value of compromising such an account rises with every feature AI platforms add.
OpenAI’s underlying message is that protecting these accounts should no longer be treated as an advanced niche practice. It is becoming part of basic platform design.
Security posture as product differentiation
The launch also reflects growing competition around trust in the AI sector. Companies are racing not only on model capability, but on the credibility of the environments in which those models are used. If users increasingly place sensitive material inside AI systems, providers will be judged on how well they secure access to those systems.
By bundling heightened controls into a named mode, OpenAI is making security legible to users who may not want to configure individual settings one by one. That can improve adoption, especially among people who understand the risk but are not security experts.
It also helps OpenAI align its public product posture with a broader cybersecurity agenda. The company describes Advanced Account Security as part of a larger action plan to expand access to protective technologies for communities, critical systems, and national security.
The tradeoff is intentional friction
The most important detail may be the one that creates the most inconvenience: recovery gets harder. Many platforms weaken their own security by leaving softer fallback channels in place, allowing attackers to bypass strong login protections through compromised email accounts, SMS interception, or support manipulation.
OpenAI is attempting to close that gap. If email and SMS recovery are disabled, then an attacker who compromises those channels has fewer options. The cost is that legitimate users must manage backup credentials carefully. For high-risk users, that is usually the correct trade. For casual users, it will depend on how much inconvenience they are willing to accept in exchange for stronger protection.
What the rollout signals
Advanced Account Security does not solve every AI security issue. It does not govern what users paste into models, how connected apps handle data, or how organizations manage broader access controls. But it does address a foundational problem: whether the account itself can be taken over through common attack paths.
That matters because identity is the front door to everything else. Once an attacker gets in, the distinction between chatbot, work assistant, code environment, and knowledge store starts to collapse.
OpenAI’s new mode acknowledges that reality. The company is effectively saying that in the AI era, account security is no longer peripheral infrastructure. It is part of the product’s safety boundary.
For users who treat ChatGPT or Codex as a serious workspace, the new option is less a premium extra than a sign of where platform security now has to go.
This article is based on reporting by OpenAI. Originally published on openai.com