Great learning starts with the right support, available around the clock.

Training Deals Newsletter - Atlas Chatgpt
29 Oct

2025

James Smith

ChatGPT Atlas Browser Exploit Exposes Hidden Commands

Cybersecurity researchers at LayerX discovered a critical flaw in OpenAI’s new ChatGPT Atlas browser that lets attackers inject malicious commands into the AI’s persistent memory. The flaw, dubbed “ChatGPT Tainted Memories,” exploits a Cross-Site Request Forgery (CSRF) pattern to embed hidden instructions within a user’s ChatGPT account memory. Because these instructions persist in memory, they can reappear across sessions and devices, posing a serious security risk. 

Researchers warn that the exploit could allow attackers to manipulate the AI’s responses, extract sensitive information, or even trigger unauthorised actions during normal interactions. LayerX has responsibly disclosed the vulnerability to OpenAI, urging swift remediation. Security experts advise users to remain cautious, avoid interacting with untrusted links, and stay updated on forthcoming patches and security advisories from OpenAI. 

How the Attack Plays Out? 

Below is a step-by-step breakdown of how the Tainted Memories exploit works, from initial login to persistent cross-device impact. 


 

1) Logged-in User with Active Session 

The attack begins when the victim is already logged into ChatGPT Atlas and has a valid authentication cookie or token stored in their browser. This active session becomes the gateway for the attacker’s next move. 

2) Visiting a Compromised Web Page 

The attacker tricks the user into visiting a malicious or compromised website through a phishing link, a fake pop-up, or embedded content. The page appears normal, but is designed to silently target ChatGPT’s active session. 

3) Hidden CSRF Request Execution 

Once the page loads, it executes a Cross-Site Request Forgery (CSRF) request in the background. This request piggybacks on the user’s existing ChatGPT credentials, making it appear legitimate to OpenAI’s systems. 

4) Injection Into ChatGPT’s Memory 

The forged request injects hidden instructions into ChatGPT’s persistent Memory. These instructions remain invisible to the user but are stored in the account’s data, effectively planting the malicious payload. 

5) Triggering the “Tainted Memories” 

During future interactions, when the user queries ChatGPT, these “tainted memories” are automatically recalled. The AI then executes the attacker’s hidden instructions, which can include data extraction, response manipulation, or unauthorised actions. 

6) Persistent and Cross-Device Impact 

Because the injected instructions are stored in the account’s memory, they resurface across sessions, browsers, and devices. This persistence makes the exploit particularly dangerous and hard to detect, as the malicious behaviour follows the user wherever they log in. 

Conclusion

The ChatGPT Tainted Memories exploit highlights the hidden risks of integrating AI-driven memory into browsers. Because the injected commands persist across sessions, the attack remains active even after the user switches devices, making it both stealthy and dangerous.   

As ChatGPT Atlas is still a new browser, users should exercise caution when sharing critical or personal information. Stay tuned for official updates and further security guidance in our upcoming coverage. In the meantime, consider using alternative browsers such as Chrome, Firefox, Edge, or Safari until a verified fix is released. 

+
certificate

Training Deals- Get a Quote

red-star Who Will Be Funding The Course?

red-star
red-star
+44
red-star

Preferred Contact Method