Microsoft is actively assisting its customers in navigating the evolving landscape of artificial intelligence, particularly in light of the European Union’s AI Act, set to introduce key obligations in 2025. The company, having hosted its AI Tour in Brussels, Paris, and Berlin, witnessed firsthand the enthusiasm of European organizations eager to leverage the latest AI technologies, even as they prepared for this new regulatory framework. Microsoft’s commitment lies in enabling customers to both innovate using AI and comply with the EU AI Act.
Navigating the EU AI Act
Microsoft is developing its products and services to meet its obligations under the EU AI Act, while at the same time helping customers integrate and use these technologies in compliance with the new regulations. The company is also collaborating with European policymakers to help develop efficient, effective implementation practices under the EU AI Act, with the aim of aligning them with emerging international standards.
As the dates for compliance with the EU AI Act are staggered and key implementation details are yet to be finalized, Microsoft is offering information and tools on an ongoing basis. Customers can consult the EU AI Act documentation available on the Microsoft Trust Center for the latest updates.
Building Compliant Products and Services
Worldwide, organizations rely on Microsoft’s products and services for innovative AI solutions. For global businesses, regulatory compliance is a priority. In every customer agreement, Microsoft commits to adhering to all applicable laws and regulations, including the EU AI Act. This commitment extends to the company’s dedicated investments in its AI governance program. Microsoft’s inaugural Transparency Report details the risk management approach adopted across the entire AI development lifecycle. This includes practices such as impact assessments and red-teaming. These evaluations identify potential risks, and the Sensitive Uses program ensures additional oversight for teams building high-risk systems.
Risk management involves systematic measurement to evaluate risk prevalence and severity against defined metrics. Mitigation strategies include content classifiers in Azure AI Content Safety, continuous monitoring, and incident response. The Responsible AI Standard guides Microsoft’s engineering teams in building AI solutions; the standard was drafted with an early version of the EU AI Act in mind.
Cross-functional teams, consisting of experts in AI governance, engineering, legal, and public policy, are focused on aligning internal standards and practices with the final EU AI Act text and early implementation details. These groups are also identifying any additional engineering work needed to stay ready. For example, the prohibited practices provisions of the EU AI Act are among the first to come into effect, starting in February 2025.
Microsoft is taking a proactive, layered approach to compliance. This includes:
- Conducting a comprehensive review of existing Microsoft-owned systems.
- Creating internal policy restrictions preventing Microsoft from designing or deploying AI systems for uses prohibited by the EU AI Act.
- Developing marketing and sales guidelines to avoid promoting general-purpose AI technologies for prohibited uses.
- Updating contracts, including the Generative AI Code of Conduct, to ensure compliance with regulations.
Microsoft was also among the first organizations to support the three core commitments in the AI Pact, voluntary pledges designed by the AI Office to facilitate preparation for certain EU AI Act compliance deadlines. Apart from the annual Responsible AI Transparency Reports, information about Microsoft’s approach to the EU AI Act and details of how the prohibited practices provisions are being implemented can be found on the Microsoft Trust Center.
Supporting Customer Compliance
The EU AI Act emphasizes shared responsibility throughout the AI supply chain. As an upstream provider of AI tools and services, Microsoft intends to support its customers in integrating and using these offerings in high-risk AI systems. Microsoft offers knowledge, documentation, and tools to assist customers with their AI development and deployment. These efforts are underpinned by the AI Customer Commitments made in June of the previous year.
Documentation related to the EU AI Act will be published continuously on the Microsoft Trust Center. The Responsible AI Resources site offers tools, practices, templates, and other information to help customers establish good governance for EU AI Act compliance. Since 2019, the 33 Transparency Notes have provided crucial insights into the capabilities and limitations of AI tools, components, and services. Documentation for AI systems such as the Azure OpenAI Service Transparency Note and the FAQ for Copilot is also provided.
Additional guidance on documentation and transparency is expected from secondary regulatory efforts under the EU AI Act. The Reporting Framework for the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems will influence these documentation and transparency norms. Microsoft has contributed to this Reporting Framework via a process convened by the OECD.
Microsoft makes available to customers the same tools it uses internally. Notable among these is Microsoft Purview Compliance Manager, which helps customers assess and manage their compliance efforts. Other offerings include:
- Azure AI Content Safety to mitigate content-based risks.
- Azure AI Foundry for evaluating generative AI applications.
- Python Risk Identification Tool (PyRIT) for identifying potential risks.
Developing Effective Implementation Practices
The EU AI Act includes more than 60 secondary regulatory efforts that will determine implementation expectations. Microsoft is working with the central EU regulator, the AI Office, and relevant authorities to provide insights from its development, governance, and compliance experience, seeking clarity on open questions and advocating for practical outcomes. The company is directly participating in developing the Code of Practice for general-purpose AI model providers, and is also contributing to technical standards being developed by organizations such as CEN and CENELEC to address high-risk AI system requirements within the EU AI Act.
Customers have an important role in implementation through engagement with policymakers and industry groups; through this engagement, customers can inform implementation practices so that they better reflect customer needs. Going forward, Microsoft will continue to make significant product, tooling, and governance investments to help its customers innovate with AI in line with new laws like the EU AI Act. Implementation practices that are efficient, effective, and internationally interoperable are key to supporting useful and trustworthy innovation on a global scale. Finally, Microsoft welcomes feedback on how it can continue to support customers as they work to comply with laws like the EU AI Act.