Microsoft Fights Cyber Threats with AI
Microsoft is entering the fight against rising cyberattacks with the deployment of 11 new AI cybersecurity “agents.” These agents are designed to proactively identify and filter suspicious emails, block potential hacking attempts, and gather intelligence on the origin of attacks.
According to Tom Clarke, science and technology editor, cyberattacks have reached "unprecedented complexity," necessitating the use of artificial intelligence to stay ahead of the threats.

“Last year, we tracked 30 billion phishing emails,” said Vasu Jakkal, vice president of security at Microsoft. “There’s no way any human can keep up with the volume.”
With a significant portion of the world’s computers running Windows software and many businesses depending on Microsoft’s cloud computing infrastructure, the company has long been a prime target for hackers.
How the AI Agents Work
Unlike AI assistants that respond to queries or schedule appointments, AI agents are computer programs that autonomously interact with their environment to perform tasks without direct user input.
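To make the distinction concrete, here is a minimal sketch of an "agentic" loop in Python. It is purely illustrative and not Microsoft's implementation: the keyword list, scoring heuristic, and function names are all invented for this example, standing in for what would in practice be a trained model acting on real mail flows.

```python
# Illustrative sketch (NOT Microsoft's implementation): a minimal agentic loop
# that observes an inbox, decides, and acts -- with no user prompt in between.
SUSPICIOUS_KEYWORDS = {"urgent wire transfer", "verify your password", "click here now"}


def score_email(subject: str, body: str) -> float:
    """Crude keyword heuristic; a real agent would use a trained classifier."""
    text = f"{subject} {body}".lower()
    hits = sum(kw in text for kw in SUSPICIOUS_KEYWORDS)
    return hits / len(SUSPICIOUS_KEYWORDS)


def agent_step(inbox: list[dict], quarantine: list[dict], threshold: float = 0.3) -> None:
    """One autonomous cycle: scan every message and quarantine suspicious ones."""
    for email in list(inbox):  # iterate over a copy so we can mutate the inbox
        if score_email(email["subject"], email["body"]) >= threshold:
            inbox.remove(email)
            quarantine.append(email)


inbox = [
    {"subject": "Lunch?", "body": "Are you free at noon?"},
    {"subject": "URGENT wire transfer", "body": "Click here now to verify your password."},
]
quarantine: list[dict] = []
agent_step(inbox, quarantine)
```

The point of the sketch is the loop itself: the program perceives its environment (the inbox), applies a policy, and takes action without waiting for a query, which is what separates an agent from a chat-style assistant.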
The deployment of these “agentic” AI models is a response to the growing sophistication of cyberattacks. The rise of readily available malware on the dark web, coupled with the potential for AI to write new malware and automate attacks, has created what Jakkal describes as a “gig economy” for cybercriminals.
“We are facing unprecedented complexity when it comes to the threat landscape,” Jakkal stated.
Microsoft’s AI agents, some developed internally and others by external partners, will be integrated into Microsoft’s Copilot suite of AI tools. They will primarily serve the IT and cybersecurity teams of its customers rather than individual Windows users.
Concerns and Future Outlook
While Microsoft advances with its AI-powered cybersecurity, concerns exist within the field. Meredith Whittaker, CEO of the messaging app Signal, expressed worries about the potential for AI agents to access and analyze user data. “Whether you call it an agent, whether you call it a bot, whether you call it something else, it can only know what’s in the data it has access to, which means there is a hunger for your private data and there’s a real temptation to do privacy invading forms of AI.”
Microsoft has stated that its cybersecurity agents are designed with specific roles, allowing them access only to data relevant to their tasks. The company also employs a “zero trust framework” to continuously evaluate the AI agents’ adherence to their programmed rules. The roll-out of this new AI cybersecurity software by a company as dominant as Microsoft will be closely observed.
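The role-scoped access model Microsoft describes can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Microsoft's design: the role names, data categories, and permission table are invented, and a real zero-trust system would also verify identity and context on every request.

```python
# Hypothetical sketch of role-scoped data access (not Microsoft's design):
# each agent is assigned a role, and every read is checked against that role.
ROLE_PERMISSIONS = {
    "phishing_filter": {"email_metadata", "email_body"},
    "threat_intel": {"network_logs"},
}


def can_access(role: str, data_category: str) -> bool:
    """Zero-trust style check: deny by default unless the role explicitly allows it."""
    return data_category in ROLE_PERMISSIONS.get(role, set())
```

Under this pattern a phishing-filter agent can read mail content but is refused network logs, and an unrecognized role is refused everything, which is the "access only to data relevant to their tasks" property described above.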
