By People's Voice Editorial·Breaking News Analysis·May 1, 2026 at 4:26 PM

OpenAI Adds Phishing-Resistant Security for ChatGPT Accounts

Photo by Tony Webster, via Wikimedia Commons (CC BY 2.0)


San Francisco, California - OpenAI introduced an opt-in security mode for ChatGPT and Codex accounts that replaces password login and email or SMS recovery with passkeys, physical security keys, and recovery keys, and adds shorter sessions and login alerts, according to an April 30 company announcement.

The feature, called Advanced Account Security, is aimed at users whose AI accounts may now contain sensitive personal context, professional material, connected workflow access, or codebase information. OpenAI said the setting is designed for people at increased risk of digital attacks, including journalists, elected officials, political dissidents, researchers, and security-conscious users.

What Happened

OpenAI said the setting is available in the Security section of ChatGPT accounts on the web. Once a user enrolls, the protections apply to ChatGPT and Codex accounts accessed through the same login.

The main change is at sign-in. Advanced Account Security requires passkeys or physical security keys and disables password-based login, according to OpenAI. The company said the goal is to make phishing-resistant sign-in the default for users who need stronger protection.

"Advanced Account Security requires passkeys or physical security keys while disabling password-based login, helping make phishing-resistant sign-in the default for people who need it most."

OpenAI, April 30, 2026

A FIDO2 USB security token shows the kind of hardware-backed authentication OpenAI is promoting for high-risk accounts. Photo by Yubinerd123, via Wikimedia Commons (CC BY-SA 4.0).

OpenAI also removed two common recovery routes for enrolled accounts. The company said Advanced Account Security disables email and SMS recovery because a compromised mailbox or phone number can give an attacker a path back into a protected account.

Instead, enrolled users must rely on backup passkeys, security keys, and recovery keys. That raises the security bar, but it also creates a hard usability tradeoff: OpenAI said its support team will not be able to recover accounts for users who lose access to those stronger recovery methods.
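OpenAI has not published its recovery-key format. As a general pattern, though, recovery keys are high-entropy one-time codes shown to the user exactly once, with the server keeping only a hash so that a database leak does not expose usable keys. A minimal sketch, with an invented five-group code format:

```python
import hashlib
import secrets

def generate_recovery_key():
    # Hypothetical format: five groups of four characters, drawn from an
    # alphabet that omits easily confused characters (0/O, 1/I/L, etc.).
    alphabet = "ABCDEFGHJKMNPQRSTVWXYZ23456789"
    groups = ["".join(secrets.choice(alphabet) for _ in range(4))
              for _ in range(5)]
    return "-".join(groups)

def store(key):
    # The server stores only a digest; the plaintext key is displayed
    # to the user once and never retrievable afterward.
    return hashlib.sha256(key.encode()).hexdigest()

key = generate_recovery_key()
digest = store(key)
print(len(key))  # 24 (20 characters plus 4 hyphens)
```

Because only the hash survives on the server, losing the printed key means losing the recovery path, which is exactly the tradeoff OpenAI describes.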

"Because account recovery is restricted to these more secure methods, OpenAI Support will not be able to assist with account recovery for users enrolled in Advanced Account Security."

OpenAI, April 30, 2026

The feature also shortens signed-in sessions, sends login alerts, and lets users review active sessions across devices, according to OpenAI. Those controls do not stop every attack, but they reduce the time an attacker can use an active session and give users a better chance to notice suspicious access.
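The session controls amount to a time-to-live check plus visibility into what is signed in where. The sketch below is illustrative only; the eight-hour lifetime is an invented number, and OpenAI has not disclosed its actual session length.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(hours=8)  # hypothetical shortened lifetime

@dataclass
class Session:
    device: str
    created: datetime

    def expired(self, now):
        # A shorter TTL shrinks the window in which a stolen
        # session token remains usable.
        return now - self.created > SESSION_TTL

now = datetime.now(timezone.utc)
sessions = [
    Session("laptop", now - timedelta(hours=1)),
    Session("unknown-phone", now - timedelta(hours=9)),
]

# The "review active sessions" view is just this filter, surfaced to
# the user so an unfamiliar device stands out.
active = [s.device for s in sessions if not s.expired(now)]
print(active)  # ['laptop']
```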

Why The Mechanism Matters

The technical shift is from reusable secrets to phishing-resistant authentication. A password, SMS code, or email recovery link can be stolen from a fake login page, intercepted through a phone-number attack such as SIM swapping, or reused after a breach. FIDO passkeys and physical security keys use public-key cryptography tied to the real service's origin, so a fake site cannot simply capture a credential and replay it against the legitimate login.
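That origin binding can be sketched in a few lines. This is an illustrative simulation, not real WebAuthn: it uses symmetric HMAC in place of the asymmetric signatures FIDO2 authenticators actually produce (so the "server" here holds a copy of the key, which real FIDO avoids by registering only a public key), and every name and origin below is made up.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Simulates a passkey: one non-exportable key per site origin."""

    def __init__(self):
        self._keys = {}  # origin -> secret key; never leaves the device

    def register(self, origin):
        self._keys[origin] = secrets.token_bytes(32)
        return self._keys[origin]  # real FIDO would return a *public* key

    def sign(self, origin, challenge):
        # The signature covers the origin the browser reports, so a
        # credential minted for a phishing domain is useless elsewhere.
        key = self._keys.setdefault(origin, secrets.token_bytes(32))
        return hmac.new(key, origin.encode() + challenge,
                        hashlib.sha256).digest()

def server_verify(expected_origin, registered_key, challenge, signature):
    expected = hmac.new(registered_key,
                        expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

auth = Authenticator()
key = auth.register("https://chatgpt.example")
challenge = secrets.token_bytes(16)

# Legitimate login: the signature matches the registered origin.
ok = server_verify("https://chatgpt.example", key, challenge,
                   auth.sign("https://chatgpt.example", challenge))

# Phishing site: the browser reports the fake origin, so the signature
# is bound to the wrong domain and verification fails.
phished = server_verify("https://chatgpt.example", key, challenge,
                        auth.sign("https://chatgpt-login.evil", challenge))
print(ok, phished)  # True False
```

A real deployment would go through the browser's WebAuthn API rather than hand-rolled crypto; the point here is only that the signed material includes an origin the phishing site cannot forge.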

NIST's digital identity guidance puts that choice in a government-standards context. NIST Special Publication 800-63B says higher Authenticator Assurance Levels require stronger resistance to credential theft, and the 800-63-4 draft says Authenticator Assurance Level 3 (AAL3) requires a phishing-resistant authenticator with a non-exportable authentication key.

"AAL3 authentication requires a phishing-resistant authenticator with a non-exportable authentication key."

NIST Special Publication 800-63B

CISA's public guidance gives the simpler consumer framing. The agency says multifactor authentication is a layered approach to securing online accounts and the data they contain, and it says users with MFA are less likely to be hacked because a stolen password alone does not satisfy the second authentication requirement.

OpenAI's version goes beyond basic MFA by removing password login and weaker recovery channels for enrolled accounts. That matters because account recovery is often the back door in account takeover cases: if an attacker controls a user's email account or phone number, a stronger login prompt can be bypassed through reset flows.

The Response

OpenAI framed the rollout as a privacy and account-protection control for users whose ChatGPT histories and Codex access have become more sensitive. The company said ChatGPT accounts can hold personal and professional context as AI tools become more connected to work systems and developer workflows.

The company also said conversations from enrolled accounts will be automatically excluded from model training. That privacy setting is separate from login security, but OpenAI bundled it into the same high-risk account mode for users handling especially sensitive information.

"With Advanced Account Security enabled, that preference is automatic: conversations from those accounts will not be used to train our models."

OpenAI, April 30, 2026

Yubico, which partnered with OpenAI on a custom security-key bundle, presented the rollout as a push to bring hardware-backed passkeys to AI users. The company said the two-pack includes a YubiKey C Nano for daily laptop use and a YubiKey C NFC for mobile and backup authentication.

Dane Stuckey, OpenAI's chief information security officer, said in Yubico's April 30 release that security keys are among the best defenses against phishing and that OpenAI already uses YubiKeys internally to protect employees.

"Security keys are one of the best ways to protect accounts from phishing, and Yubico has played a leading role in making that protection practical and accessible."

Dane Stuckey, OpenAI chief information security officer, in Yubico's April 30, 2026 release

Who Has To Use It

For most users, Advanced Account Security is optional. OpenAI said users can use the custom Yubico bundle, other FIDO-compliant security keys, or software-based passkeys.

One group faces a deadline. OpenAI said individual members of its Trusted Access for Cyber program who use its most cyber-capable and permissive models must enable Advanced Account Security beginning June 1, 2026. Organizations with trusted access can instead attest that phishing-resistant authentication is part of their single sign-on workflow, according to OpenAI.

That requirement shows why the feature is more than a consumer-account setting. OpenAI's Trusted Access for Cyber program gives verified defenders access to more capable cyber models. If those accounts are compromised, the risk is not only private chat history; it can include access to sensitive defensive workflows, code, and security research.

OpenAI's 2025 logo identifies the company behind Advanced Account Security for ChatGPT and Codex accounts. OpenAI logo, via Wikimedia Commons (public domain).

What People Are Saying

"Today, we're introducing Advanced Account Security, a new opt-in setting for ChatGPT accounts, designed for people at increased risk of digital attacks, as well as for those who want the strongest account protections available."

OpenAI, April 30, 2026

"Advanced Account Security disables email and SMS recovery and requires stronger recovery methods: backup passkeys, security keys, and recovery keys."

OpenAI, April 30, 2026

"MFA is a layered approach to securing your online accounts and the data they contain."

Cybersecurity and Infrastructure Security Agency

"This partnership with OpenAI delivers the highest level of protection against phishing with a low friction user experience."

Jerrod Chong, chief executive officer, Yubico

By The Numbers

OpenAI announced Advanced Account Security on April 30, 2026.

The Trusted Access for Cyber requirement begins June 1, 2026, according to OpenAI.

Yubico said the custom OpenAI bundle includes two keys: a YubiKey C Nano and a YubiKey C NFC.

NIST's 800-63B guidance defines three Authenticator Assurance Levels, with AAL3 requiring a phishing-resistant authenticator with a non-exportable key.

The Big Picture

Advanced Account Security treats AI accounts less like ordinary consumer logins and more like infrastructure accounts. That reflects how ChatGPT and Codex are being used: not only for casual questions, but for professional work, sensitive research, political activity, and software development.

The tradeoff is clear. Users who enroll get stronger protection against phishing, mailbox compromise, SMS recovery abuse, and stolen sessions. They also accept stricter recovery responsibility, since OpenAI says support cannot restore access if the enrolled recovery methods are lost.

The next test is adoption. OpenAI can require the setting for high-risk cyber-defender access, but ordinary users must decide whether the security gain is worth the added recovery burden.