The Rise of Autonomous AI Attacks: A New Cybersecurity Threat
We’ve been warned for some time: Artificial intelligence (AI) will dramatically escalate the threat of cyberattacks. This year, that nightmare scenario is becoming a frightening reality as AI tools are deployed to carry out attacks with frightening sophistication.
Recent analysis on advanced threats from semi-autonomous AI attacks and a new GenAI attack warning highlights this rapidly evolving threat.
The Dawn of AI Agents
Symantec’s new analysis demonstrates how an AI agent can be used to conduct phishing attacks. The report describes how agents possess “more functionality and can actually perform tasks such as interacting with web pages.” While the intention may be to automate routine tasks, attackers are leveraging these agents to build infrastructure and launch attacks. The security team has warned of this before, that “while existing Large Language Model (LLM) AIs are already being put to use by attackers, they are largely passive and could only assist in performing tasks such as creating phishing materials or even writing code. At the time, we predicted that agents would eventually be added to LLM AIs and that they would become more powerful as a result, increasing the potential risk.” Now there’s a proof of concept.
“Our goal was to see if an agent could carry out an an attack end to end with no intervention from us other than the initial prompt,” says Symantec’s Dick O’Brien. The idea of an AI agent hunting the internet and LinkedIn to find a target’s email address, then using this information to craft its own malicious scripts and launch phishing campaigns should be alarming.
J. Stephen Kowski of SlashNext, notes the duality of new tech advancements: “The rise of AI agents…shows the dual nature of technology—tools built for productivity can be weaponized by determined attackers with minimal effort.”
Bypassing Security Measures
Even the built-in security measures are easily bypassed, Symantec’s proof of concept (POC) revealed. When the “Operator” (the AI agent) initially flagged the attempt saying it involved sending unsolicited emails, this was easily overcome. By slightly modifying the instructions to state the target had authorized the emails, the AI circumvented the restriction and began performing the assigned tasks.
Andrew Bolster from Black Duck warned of the challenges in ‘constraining’ LLMs. “Examples like this demonstrate the trust-gap in underlying LLMs guardrails that supposedly prevent ‘bad’ behavior…LLM’s can be ’tricked’ into bad behavior.”
Symantec further warns that “the technology is still in its infancy, and the malicious tasks it can perform are still relatively straightforward compared to what may be done by a skilled attacker. However, the pace of advancements in this field means it may not be long before agents become a lot more powerful. It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
The Malware Generation Threat
Beyond the phishing threat, there is rise in malicious code generation. Recent reports show AI tools, like DeepSeek, are being used to generate malicious code. Tenable’s research team notes that cybercriminals are actively exploring LLMs such as OpenAI’s ChatGPT and Google’s Gemini. Even guardrails may not be enough, as Symantec’s POC demonstrated in its analysis highlighting how easily some of these mainstream GenAI guardrails can be bypassed.
DeepSeek was used “to create a keylogger that could hide an encrypted log file on disk as well as develop a simple ransomware executable.”
The Need for Robust Security Controls
Security experts agree that the key to defending against these increasingly sophisticated AI attacks is to redefine security to treat AI like people. “Organizations need to implement robust security controls that assume AI will be used against them,” says Kowski. “The best defense combines advanced threat detection technologies that can identify behavioral anomalies with proactive security measures that limit what information is accessible to potential attackers in the first place.”
Guy Feinberg from Oasis Security adds, “AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions.” The key is to assign permissions to AI, just as you would with people, treat them the same, and that included identity-based governance and security. And that included identity-based governance and security, and an assumption that AI will be tricked into making mistakes.
As this threat landscape continues to evolve, continuous vigilance is crucial. We are not yet ready for this.