ZDNET’s key takeaways
- Google detected novel adaptive malware in the wild.
- This new malware uses LLMs to dynamically generate code.
- Google also listed other new key trends in cyberattacks.
The use of artificial intelligence (AI) in cyberattacks has entered a new phase: the development of novel malware actively used in the wild.
It was only a month or so ago that OpenAI published a report on how threat actors are using AI, outlining key trends including malicious workflow efficiency, phishing, and surveillance. OpenAI, the developer behind ChatGPT, said at the time that there was no evidence that existing AI models were being used in novel attacks. According to an update from Google’s Threat Intelligence Group (GTIG), however, AI is now being weaponized to develop adaptive malware.
Novel AI malware appears
The update, published on November 5, outlines how AI and large language models (LLMs) are being utilized in new ways to refine malware and create entirely new families.
A number of malware strains have been detected in the wild that use AI to dynamically generate malicious scripts, create prompts for data theft, obfuscate code, evade detection, and alter malware behavior during the attack phase.
Google outlined novel AI features in the following strains of malware:
- FRUITSHELL: A publicly available reverse shell containing hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems.
- PROMPTFLUX: An experimental VBScript dropper that uses obfuscation and abuses the Google Gemini API to dynamically rewrite its own source code.
- PROMPTLOCK: Another experimental strain, a Go-based ransomware variant that leverages an LLM to dynamically generate and execute malicious scripts.
- PROMPTSTEAL: An active Python data miner utilizing AI to generate data-stealing prompts.
- QUIETVAULT: An active JavaScript credential stealer that targets GitHub and NPM tokens. It also uses an AI prompt and AI tools installed on the host to search for additional secrets on infected systems.
“This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution,” Google researchers say.
Google says that while some of these malicious projects appear to be experimental, they highlight a shift away from using AI and LLMs purely for phishing or for technical code improvements through “vibe coding,” the practice of using AI to generate code from a concept or idea, and toward building AI into the malware itself.
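For readers unfamiliar with the term, benign “vibe coding” is simply asking a model to produce code from a plain-language description. The sketch below is a minimal, harmless illustration, assuming the google-generativeai Python package and an API key in an environment variable; the model name and prompt are placeholders, not anything drawn from the malware Google describes.

```python
# A benign illustration of "vibe coding": asking an LLM to write code from a
# plain-language description. Assumes the google-generativeai package and a
# valid API key; the model name and prompt are illustrative only.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Write a Python function that parses a CSV file and returns its rows as dictionaries."
response = model.generate_content(prompt)

# The model returns source code as text; a developer would review it before running it.
print(response.text)
```

What GTIG describes is effectively this same request loop running inside malware at execution time rather than at a developer's keyboard, with the model's output fed straight back into the attack.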
The researchers expect to see more use of AI in specific malicious functions in the future.
Other key trends
Google’s report explored several other key trends in the world of AI cyberattacks. The first is the increasing adoption of “social engineering-like pretexts” in prompts to bypass AI safety guardrails. For example, prompts have been used to try to lure Gemini into providing information that is usually withheld from the general public. In some cases, threat actors pose as cybersecurity researchers or students participating in capture-the-flag competitions.
Another key trend, also noted by OpenAI researchers, is the abuse of AI models to refine existing malicious programs and infrastructure. Google says that state-sponsored groups from countries including North Korea, Iran, and China are using AI to enhance reconnaissance, phishing, and command-and-control (C2) operations.
There are also notable shifts taking place in the cybercriminal underground. AI-enabled tools and services are beginning to emerge in underground forums, including deepfake and malware generators, phishing kits, reconnaissance tools, vulnerability exploits, and technical support.
“This evolution underscores how AI makes modern malware more effective. Attackers are now using AI to generate smarter code for data extraction, session hijacking, and credential theft, giving them faster access to identity providers and SaaS platforms where critical data and workflows live,” commented Cory Michal, CSO at AppOmni. “AI doesn’t just make phishing emails more convincing; it makes intrusion, privilege abuse, and session theft more adaptive and scalable. The result is a new generation of AI-augmented attacks that directly threaten the core of enterprise SaaS operations, data integrity, and extortion resilience.”

