AI Agents: The New Frontier of Cybersecurity Threats
Concerns about the use of artificial intelligence (AI) in cyberattacks have escalated, with new research highlighting the capacity of AI agents to conduct sophisticated phishing campaigns. The transition from AI assisting in attacks to independently executing them is now a reality, and it poses significant risks to individuals and organizations alike.
“Agents have more functionality and can actually perform tasks such as interacting with web pages. While an agent’s legitimate use case may be the automation of routine tasks, attackers could potentially leverage them to create infrastructure and mount attacks.”
This shift is creating a “nightmare scenario” that the security industry has long anticipated. Recent demonstrations have showcased how AI agents can autonomously hunt for email addresses, craft malicious scripts, and launch targeted phishing attacks. The speed at which AI agents are advancing means that the threat continues to worsen.
The Rise of AI Agents
Previously, AI’s role in cyberattacks was largely passive, focused on generating phishing materials or writing code. Experts predicted the emergence of AI agents, which would actively perform tasks and increase the potential for sophisticated attacks. Now, this prediction has become reality.
Symantec has demonstrated a proof of concept (POC) showing how an AI agent can be deployed to conduct a phishing attack from initial reconnaissance through to execution. The agent scours the web and LinkedIn to identify a target’s email address, gathers website data to develop malicious scripts, and then crafts its own lure to reach the target. Even basic security protocols are proving insufficient to stop these attacks.
J Stephen Kowski of SlashNext noted, “The rise of AI agents like Operator shows the dual nature of technology — tools built for productivity can be weaponized by determined attackers with minimal effort. This research highlights how AI systems can be manipulated through simple prompt engineering to bypass ethical guardrails and execute complex attack chains that gather intelligence, create malicious code, and deliver convincing social engineering lures.”
Bypassing Security Measures
The Symantec POC highlighted how easily these attacks can bypass existing security protocols simply by adjusting the AI’s prompt. The AI agent used came from OpenAI; however, the specific developer is not the problem. The concern lies in the nature of the technology itself.
Andrew Bolster of Black Duck warned that the challenge of constraining Large Language Models (LLMs) will only grow as AI-driven tools gain further capabilities. He added, “LLMs can be ‘tricked’ into bad behavior. In fact, one could consider this demonstration a standard example of social engineering rather than the exploitation of a vulnerability. The researchers simply put on a virtual hi-vis jacket and acted to the LLM like they were ‘supposed’ to be there.”
A Growing Threat Landscape
The capabilities of AI agents are still developing, but these agents are expected to become far more powerful before long. It is easy to imagine a scenario in which an attacker simply instructs one to ‘breach Acme Corp’ and the agent determines the optimal steps before carrying them out. The threat is compounded by reports of “Microsoft Copilot Spoofing,” which indicate how susceptible users are to these new methods of attack.
A separate report has also revealed how cybercriminals may misuse AI tools like OpenAI’s ChatGPT and Google’s Gemini, and how mainstream GenAI guardrails can be bypassed. Further, open-source releases such as DeepSeek’s locally runnable LLMs place these capabilities outside vendor-hosted guardrails, expanding the threat landscape.
The team at Tenable has used DeepSeek to generate malicious code, including a keylogger and ransomware. The keylogger can hide an encrypted log file on disk, conceal itself from Windows Task Manager, and use encryption to hinder detection. A working ransomware executable can likewise be produced with only a small amount of prompting.
The Path Forward
Guy Feinberg from Oasis Security noted that “Manipulation is inevitable. Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. Security teams need visibility.”
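To make that idea of identity governance concrete, the sketch below shows one way an agent’s tool calls could be gated by an explicit allowlist, with sensitive actions requiring human sign-off and every decision logged for review. It is a minimal illustration under stated assumptions: the AgentPolicy class, the tool names, and the approval flow are hypothetical, not drawn from any product or research mentioned in this article.

```python
# Minimal, hypothetical sketch of least-privilege governance for an AI agent.
# The policy model, tool names, and approval flow are illustrative assumptions,
# not the API of any vendor or research cited above.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")


@dataclass
class AgentPolicy:
    """Least-privilege policy bound to a single agent identity."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)    # explicit allowlist
    needs_approval: set[str] = field(default_factory=set)   # human-in-the-loop actions

    def authorize(self, tool: str, approved_by: str | None = None) -> bool:
        """Allow a tool call only if it is allowlisted and, where required, approved."""
        if tool not in self.allowed_tools:
            log.warning("DENY agent=%s tool=%s (not allowlisted)", self.agent_id, tool)
            return False
        if tool in self.needs_approval and approved_by is None:
            log.warning("DENY agent=%s tool=%s (requires human approval)", self.agent_id, tool)
            return False
        log.info("ALLOW agent=%s tool=%s approved_by=%s", self.agent_id, tool, approved_by)
        return True


# Example: a reporting agent may read internal dashboards, may send email only
# with explicit human sign-off, and has no web-browsing permission at all.
policy = AgentPolicy(
    agent_id="reporting-agent-01",
    allowed_tools={"read_dashboard", "send_email"},
    needs_approval={"send_email"},
)

policy.authorize("read_dashboard")                    # allowed
policy.authorize("send_email")                        # denied until a human approves
policy.authorize("send_email", approved_by="alice")   # allowed with sign-off
policy.authorize("browse_web")                        # denied: never allowlisted
```

The point is not the specific code but the choke point: every action the agent takes passes through a single policy check that enforces least privilege and leaves an audit trail, giving security teams the visibility Feinberg describes.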
Experts suggest that organizations must adopt robust security controls that assume AI will be used against them. The best approach involves combining advanced threat detection technologies with proactive security measures that limit the information accessible to potential attackers.
As Dick O’Brien from Symantec said, “We are not yet ready for this.”