
Google Chrome Passwords Alert—Beware The Rise Of The AI Infostealers

By Unknown Author | Source: Forbes | Read Time: 3 mins

1. Researchers have demonstrated that credential-stealing malware targeting the Google Chrome password manager can be created with the help of AI.
2. The malware was generated by using jailbreak techniques to bypass the safety guardrails of large language models.
3. Users should be cautious when storing sensitive information in the Chrome password manager.
4. Regularly updating Chrome and using strong, unique passwords helps protect against such threats.
5. Stay informed about cybersecurity risks and take the necessary precautions to safeguard your online accounts.


Beware the rise of the AI infostealers. There is, so it seems, quite literally no stopping the rise of infostealer malware. With 2.1 billion credentials compromised by the insidious threat, 85 million newly stolen passwords being used in ongoing attacks, and some tools able to defeat browser security in 10 seconds flat, it’s certainly hard to ignore. But things look set to get worse as new research has revealed how hackers can use a large language model jailbreak technique, something known as an immersive world attack, to get AI to create the infostealer malware for them. Here’s what you need to know.

Introduction to the Threat

A threat intelligence researcher with absolutely no malware coding experience has managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer capable of compromising sensitive information stored in the Google Chrome web browser. That is the chilling summary of the introduction to the latest Cato Networks threat intelligence report, published March 18.

The worrying hack managed to get around protections built into large language models that are supposed to provide guardrails against just this kind of malicious behavior by employing something known as the immersive world jailbreak. “Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World,” Vitaly Simonovich, a threat intelligence researcher at Cato Networks, said, “showcases the dangerous potential of creating an infostealer with ease.” And, oh boy, Vitaly is not wrong.

The Immersive World Attack

According to the Cato Networks researchers, an immersive world attack uses what is called “narrative engineering” to bypass those aforementioned LLM security guardrails. The attacker constructs a highly detailed but entirely fictional world and assigns the LLM roles within it, normalizing operations that should otherwise be restricted.

The researcher in question, the report said, got three different AI tools to play roles within this fictional, immersive world, each with specific tasks and challenges. The end result, as highlighted in the Cato Networks report, was malicious code that successfully extracted credentials from the Google Chrome password manager. “This validates both the Immersive World technique and the generated code's functionality,” the researchers said.

Response from Companies

Cato Networks said that it contacted all the AI tools concerned, with DeepSeek being unresponsive while Microsoft and OpenAI acknowledged receipt of the threat disclosure. Google also acknowledged receipt, Cato said, but declined to review the code. I have reached out to Google, Microsoft, OpenAI, and DeepSeek regarding the AI jailbreak report and will update this article if any statements are forthcoming.
