Microsoft is treating AI agents like any other employee when it comes to security, applying zero trust principles to agentic AI. At its Build conference in Seattle, the tech giant announced that it will extend its Entra, Purview, and Defender tools to cover AI agents built with Microsoft's own tools and those from key partners.
Zero Trust for AI Agents
Under a zero trust approach, AI agents are not trusted by default and must carry their own secure identities. To achieve this, Microsoft has unveiled Microsoft Entra Agent ID, a system for managing and securing agentic AI. It automatically assigns an identity to each AI agent created in Microsoft Copilot Studio or Azure AI Foundry, centralizing agent and user management in a single solution.
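The deny-by-default principle behind this model can be sketched in a few lines. The names below (`AgentIdentity`, `verify_agent`, the issuer strings) are illustrative assumptions for this sketch, not part of the Entra Agent ID API:

```python
# A minimal sketch of zero-trust checks for AI agents: an agent with no
# identity, an unrecognized issuer, or a missing permission scope is refused,
# just like an unauthenticated human user would be. All names are hypothetical.
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    issuer: str
    scopes: FrozenSet[str]

# Illustrative stand-ins for identity providers the platform trusts.
TRUSTED_ISSUERS = {"copilot-studio", "azure-ai-foundry"}

def verify_agent(identity: Optional[AgentIdentity], required_scope: str) -> bool:
    """Deny by default; allow only a known issuer with the required scope."""
    if identity is None:
        return False
    if identity.issuer not in TRUSTED_ISSUERS:
        return False
    return required_scope in identity.scopes

# An agent provisioned with an identity and the right scope is allowed through,
# while an anonymous agent is denied by default.
agent = AgentIdentity("agent-001", "copilot-studio", frozenset({"read:docs"}))
print(verify_agent(agent, "read:docs"))  # True
print(verify_agent(None, "read:docs"))   # False
```

The design choice mirrored here is that identity is attached at creation time and every access is checked, rather than trusting anything that originates inside the network perimeter.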
Enhanced Security Features
Microsoft is also extending its Purview data security and compliance controls to AI agents built within Azure AI Foundry and Copilot Studio, as well as custom-built AI applications via a new software development kit (SDK). This allows developers to reduce the risk of their AI applications oversharing or leaking data and supports compliance efforts. Security teams gain visibility into AI risks and mitigations through this integration.
Integration with Existing Tools
The company is also adding Defender security tools directly into Azure AI Foundry, narrowing the tooling gap between security and development teams and enabling developers to proactively mitigate AI application risks and potential vulnerabilities. Microsoft additionally plans to work with ServiceNow and Workday, integrating with their agent platforms to provide automated provisioning of agent identities.
Industry Context
Agentic AI is the latest trend in big tech, with industry leaders suggesting it marks the next step in the evolution of generative AI. However, concerns over security have come to the forefront as the industry pivots to this technology. Microsoft’s move to enhance security for AI agents comes as the company expects 1.3 billion AI agents to be in operation by 2028.
Conclusion
Microsoft’s announcements underscore its commitment to providing comprehensive security and governance for AI, built on the security lessons of the past and in line with its Secure Future Initiative principles. The new security features aim to bolster protection for enterprises adopting agentic AI technology.