Cyber threats have entered a new phase: hacker groups are being replaced by self-learning algorithms capable of finding vulnerabilities, creating exploits, and adapting to even the most sophisticated defense systems without human involvement.
Over the past couple of years, the first targeted attacks using autonomous malware based on generative AI have been recorded, forcing corporations and governments to urgently restructure their defense strategies. We investigated how these digital predators work, who has already fallen victim to them, and whether it's possible to counter them.
What Is Self-Propagating AI Malware?
Self-propagating malicious software based on artificial intelligence is a new class of cyber threat that combines automatic spreading and infection with the decision-making abilities of AI. Unlike traditional viruses, which follow rigidly defined scripts, such programs can analyze their environment, make tactical decisions, and create or modify their own code to overcome specific obstacles.
At their core are large language models (LLMs) trained on massive datasets of code, technical documentation, and vulnerability descriptions. Such a virus doesn't just execute a script: it identifies the operating system, installed software, and network policies, and then generates its payload on the fly, writing fragments of code to exploit the weaknesses it has found. If a vulnerability is discovered in a corporate VPN client, the AI module doesn't look for a ready-made exploit in its memory; it generates the code from scratch or adapts it to the specific software version, drawing on its knowledge of vulnerability classes (buffer overflow, SQL injection). It can also chain several low-severity vulnerabilities into a single attack path.

Beyond code generation, it makes decisions: if one path is blocked, it looks for another, mimicking the behavior of an experienced hacker. If an exploit fails (for example, an intrusion prevention system is triggered), the agent doesn't stop. It analyzes the system's response (an error message, a connection reset), draws conclusions, and generates a new version of the attack. It can switch from trying to gain root access through a kernel vulnerability to stealing credentials from a configuration file, or to attacking a neighboring, less protected system on the network. This action-analysis-correction cycle mimics the work of a professional hacker, only far faster. The same adaptivity extends to social engineering: the agent can create unique phishing emails, voice messages, or deepfake videos, learning and improving each time based on the victim's reaction.
After establishing itself in one system, the agent looks for paths for lateral movement within the network. It analyzes routing tables, searches for shared network resources, and tries to guess passwords to other machines using stolen hashes or candidates generated from the corporate password policy. To maintain access, it can autonomously create and disguise backdoors, register new scheduled tasks, or modify system libraries, with concealment techniques that differ for Windows, Linux, and network equipment.
Real Cases

Last year, many companies encountered this new class of viruses in practice. The biggest players in the cybersecurity market — CrowdStrike, Palo Alto Networks — have published reports documenting the first waves of attacks with elements of autonomous AI.
In the crosshairs, first and foremost, are critical infrastructure (energy, transportation, finance), where many outdated systems remain in service, and complex corporate networks with thousands of devices. Everyone is at risk, but the target of such viruses is not your home laptop but entire ecosystems, which is why the damage is measured in billions.
For instance, fraudsters in China used deepfakes to trick a government facial recognition system in a scheme worth more than $75 million. To do this, they bought high-quality photos of strangers and fake personal data on the black market, then ran the purchased images through deepfake applications that animate a still picture and turn it into a video. The result was convincing: the people in the photos appeared to nod, blink, and move.
A similar story happened in the USA. Back in 2022, fraudsters used AI models that precisely imitate a person's voice to steal about $11 million from victims.
And a few years ago, when such cases were not yet widespread, criminals stole over $240,000 from a British energy company. Using a voice deepfake in a phone call with a manager, the fraudsters impersonated the director of the parent company and asked him to send a tidy sum to an account; the manager complied. A bank employee in the UAE was deceived in a similar way: in mid-October 2021, it emerged that fraudsters had stolen about $35 million from a UAE bank by imitating the voice of a company director.
Another known case involved a Chinese fraudster using artificial intelligence to impersonate a businessman's friend and convince him to transfer $610,000. The businessman received a video call from a person who looked and spoke like his close friend. But the caller turned out to be a criminal.
In the summer of 2024, a hacker group used a tool that automatically found and hacked outdated IoT devices (cameras, routers) in the networks of American logistics companies. The virus didn't just scan ports: it analyzed device responses, looked up vulnerabilities for specific models in CVE databases, and generated a unique exploit on the spot, bypassing the signatures of standard antiviruses. This turned thousands of harmless "smart" devices into a platform for a large-scale DDoS attack that paralyzed operations at several seaports.
It is also known that the hacker group Scattered Spider used large language models (LLMs), such as ChatGPT and its underground analogs (WormGPT, FraudGPT), to create high-quality, personalized phishing emails. The targets were law firms in the USA and UK with access to client funds. Instead of template emails, the AI generated unique texts based on information about the target firm found in open sources (LinkedIn, press releases). The emails were impeccable in grammar, style, and context, which sharply increased the success rate of infections.

In fact, specialized AI bots like WormGPT and FraudGPT are actively sold and used on the darknet. They are built on open-source models and fine-tuned on malicious data. Hackers use them not as autonomous agents but as copilots: the bot helps write complex fragments of exploit code, analyze port-scanning results, and suggest possible attack vectors. So, after a regular scanner finds a vulnerable web service in a corporate network, the operator can ask WormGPT to "write a script to exploit an SQL injection in [a specific component]." The AI generates working code, which the hacker then launches.
How Countries Are Creating an "AI Shield": The Digital Arms Race

The response to autonomous threats is itself built on AI. Leading countries and blocs are actively developing "AI-powered cyber defense" systems.
For example, in the USA, work is underway on the "IQ Shield" project. DARPA and US Cyber Command are testing systems in which an AI operator monitors the traffic of government networks 24/7, identifying not just anomalies but behavioral patterns characteristic of another AI. The goal is to predict the actions of an autonomous virus and stay a move ahead of it, as in chess.
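What might a "behavioral pattern characteristic of another AI" look like in practice? One simple signal is tempo: an autonomous agent acts faster and more regularly than any human operator. Below is a minimal, purely illustrative sketch of such a check in Python; the thresholds and function are assumptions for illustration, not part of DARPA's or Cyber Command's actual tooling.

```python
import numpy as np

def looks_machine_driven(event_times_s, min_events=20,
                         max_median_gap_s=0.5, max_cv=0.2):
    """Heuristic: flag sessions whose actions arrive too fast and too
    regularly to be plausibly human-driven. Thresholds are illustrative."""
    times = np.sort(np.asarray(event_times_s, dtype=float))
    if times.size < min_events:
        return False                      # not enough evidence
    gaps = np.diff(times)                 # inter-event intervals
    median_gap = np.median(gaps)
    cv = gaps.std() / gaps.mean() if gaps.mean() > 0 else 0.0
    # Machine-speed: sub-second cadence with very low timing variance.
    return median_gap < max_median_gap_s and cv < max_cv

# Example: 60 requests at a near-constant 200 ms cadence -> flagged.
session = np.cumsum(np.full(60, 0.2) + np.random.normal(0, 0.005, 60))
print(looks_machine_driven(session))  # likely True
```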
Also, the American company SentinelOne is known for its autonomous EDR (Endpoint Detection and Response) platform. Its key development, Purple AI, is an AI assistant for SOC analysts. Purple AI accepts queries in natural language such as: "Find all systems where suspicious PowerShell activity has been observed in the last 48 hours and show the attack chains." It automatically correlates events, conducts investigations, and suggests response measures. This kind of operational autonomy in defense is what is needed to counter high-speed AI attacks.
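Under the hood, a natural-language request like the one above boils down to filtering and correlating endpoint telemetry. The sketch below shows roughly what that correlation can look like; the event schema, field names, and risk indicators are assumptions chosen for illustration and have nothing to do with SentinelOne's actual API.

```python
from datetime import datetime, timedelta, timezone
import pandas as pd

now = datetime.now(timezone.utc)

# Hypothetical flattened EDR event export; fields are illustrative only.
events = pd.DataFrame([
    {"host": "fin-srv-01", "ts": now - timedelta(hours=3),
     "process": "powershell.exe", "parent": "winword.exe",
     "cmdline": "powershell -enc JAB..."},
    {"host": "hr-wks-17", "ts": now - timedelta(hours=30),
     "process": "powershell.exe", "parent": "explorer.exe",
     "cmdline": "Get-ChildItem C:\\Reports"},
])

def suspicious_powershell(df, hours=48):
    """Hosts with PowerShell activity matching crude risk indicators
    (encoded commands, Office parent process) in the last `hours` hours."""
    recent = df[(df["ts"] >= now - timedelta(hours=hours)) &
                (df["process"] == "powershell.exe")]
    flagged = recent[
        recent["cmdline"].str.contains("-enc", case=False)
        | recent["parent"].isin(["winword.exe", "excel.exe", "outlook.exe"])
    ]
    return flagged.groupby("host").size().sort_values(ascending=False)

print(suspicious_powershell(events))
```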
Another American specialized startup, HiddenLayer, focuses exclusively on the security of the machine learning models themselves (MLSec). This is a critically important direction, as an AI attack can be directed not at the network, but at substituting or poisoning the AI model used for defense. The platform continuously monitors the behavior of ML models in production, detecting anomalies in their operation (for example, a sudden drop in accuracy, uncharacteristic inferences), which may be a sign of a model substitution or data poisoning attack from a hostile AI.
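One simple way to catch the "sudden drop in accuracy or uncharacteristic inferences" signal is to compare the model's live output distribution with its validation baseline. The sketch below does this with the Population Stability Index; it is a generic illustration of drift monitoring, not HiddenLayer's implementation, and the threshold and numbers are assumptions.

```python
import numpy as np

def psi(baseline_probs, live_probs, eps=1e-6):
    """Population Stability Index between two class distributions;
    values above ~0.25 are usually read as a major shift."""
    b = np.clip(np.asarray(baseline_probs, float), eps, None)
    l = np.clip(np.asarray(live_probs, float), eps, None)
    b, l = b / b.sum(), l / l.sum()
    return float(np.sum((l - b) * np.log(l / b)))

# Illustrative fractions of [benign, malicious] verdicts a detection
# model produced during validation vs. in production this hour.
baseline = [0.90, 0.10]
live     = [0.995, 0.005]   # malicious verdicts almost vanished

score = psi(baseline, live)
if score > 0.25:
    print(f"PSI={score:.2f}: model behavior drifted - possible tampering")
```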
The European Union, meanwhile, is developing the "Aegis" initiative. As part of the cybersecurity strategy, a unified platform for exchanging data on AI attacks between member states is being created. The focus is on collective analysis and rapid creation of "digital vaccines" — patches and rules for defense systems that can be deployed across the Union in hours.
The British company Darktrace, with its ActiveAI Security Platform, is a pioneer in using AI for cybersecurity. The technology uses unsupervised machine learning to learn the normal behavior of each user and device on the network. The system doesn't look for known signatures but detects micro-anomalies that may indicate an adaptive AI adversary at work (for example, an uncharacteristic sequence of data requests, or an anomalous speed of movement across the network). In 2024, Darktrace announced a module focused on detecting attacks that use legitimate AI tools (for example, ChatGPT for generating malicious code).
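Conceptually, this kind of unsupervised baselining can be approximated with off-the-shelf tools. The sketch below trains an Isolation Forest on "normal" per-device traffic features and flags a device that suddenly behaves very differently; the features and numbers are invented for illustration and say nothing about Darktrace's actual algorithms.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Per-device features observed during "normal" operation:
# [connections/hour, distinct internal hosts contacted, MB uploaded/hour]
normal = np.column_stack([
    rng.normal(40, 8, 500),    # typical connection rate
    rng.normal(5, 2, 500),     # typical fan-out
    rng.normal(30, 10, 500),   # typical upload volume
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A device that suddenly fans out across the network at machine speed.
suspect = np.array([[400, 120, 300]])
print(model.predict(suspect))        # -1 = anomaly, 1 = normal
print(model.score_samples(suspect))  # lower score = more anomalous
```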
Israel and Singapore, for instance, are building isolated digital testing grounds where two AIs, one attacking and one defending, constantly battle each other, learning and refining tactics so that their developments stay one step ahead of real threats.
These are no longer just antivirus databases. These are active cyber defense systems where artificial intelligence becomes the main defender of digital borders.
How Companies Can Defend Themselves: Zero Trust and AI Operators

Businesses cannot rely solely on the state shield. The new security paradigm is built on two pillars:
- Zero Trust as the foundation. Every device, every user, every request for data must undergo strict multi-factor authentication and authorization. Even if an AI virus penetrates the network, the Zero Trust model limits its movements and access to critical assets, preventing it from reaching its ultimate goal (see the sketch after this list).
- AI security operators (also called Security Copilots). Products like Microsoft Security Copilot or Splunk AI+ are becoming essential for SOC analysts. They aggregate data from all of the company's security systems and explain to the analyst in plain language what is happening, for example: "Abnormal activity has been detected on the accounting server, similar to the actions of an autonomous module." Such systems then suggest specific neutralization actions and can even apply pre-agreed measures automatically, becoming more effective with each attack they handle.
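As promised above, here is a minimal sketch of what a Zero Trust policy check looks like per request: identity, device posture, and least privilege are re-verified every time, so a stolen session alone gets an attacker nowhere. The roles, resources, and checks are assumptions chosen for brevity, not a production policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str             # e.g. "accountant", "sre"
    resource: str         # e.g. "finance-db"
    device_managed: bool  # enrolled in MDM and patched
    mfa_passed: bool
    geo: str

# Toy policy table: which roles may touch which resources.
ALLOWED = {"finance-db": {"accountant"}, "build-server": {"sre"}}

def evaluate(req: Request) -> bool:
    """Every request is re-checked: no implicit trust from being 'inside'."""
    if not (req.device_managed and req.mfa_passed):
        return False                               # posture + identity
    if req.role not in ALLOWED.get(req.resource, set()):
        return False                               # least privilege
    if req.geo not in {"office", "vpn"}:
        return False                               # context check
    return True

print(evaluate(Request("j.doe", "accountant", "finance-db",
                       device_managed=True, mfa_passed=True, geo="vpn")))
# An implant that steals j.doe's session on an unmanaged host still
# fails the posture check, which limits lateral movement.
print(evaluate(Request("j.doe", "accountant", "finance-db",
                       device_managed=False, mfa_passed=True, geo="office")))
```

Even this toy version shows why lateral movement becomes expensive for an autonomous agent: every new resource it touches triggers the same full set of checks.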
Thus, defense turns into a continuous collaboration of humans and AI against malicious software.
Autonomous AI viruses no longer seem distant or fantastical. Already today, they make attacks far larger in scale and more dangerous. The only way to counter them is to fight cyber threats with their own weapons. In the new digital reality, the winner will be the one whose artificial intelligence is more resourceful and inventive.