Navigating the EU AI Act: Compliance Challenges and Strategies with David Dumont
In a recent interview, David Dumont, Partner at Hunton Andrews Kurth, discussed the implications of the EU AI Act. Dumont outlined strategies for compliance and risk mitigation.

Aligning with GDPR
The impact of the AI Act is often compared to that of the GDPR. Dumont suggests that businesses can leverage existing GDPR frameworks while addressing new obligations.
“We have identified a significant number of areas where organizations will be able to leverage their current GDPR experience and compliance efforts for their EU AI Act compliance journey.”
Like the GDPR, the AI Act establishes accountability, data quality and management, governance, vendor diligence, training, risk assessment, and transparency requirements concerning the development and usage of AI systems. Comprehensive GDPR compliance programs can be adapted to fulfill these new obligations.
New Elements for Compliance
However, Dumont emphasizes that some elements of an AI Act compliance program necessitate a fresh approach. Conformity assessment obligations, for example, have no counterpart in the GDPR and may therefore be entirely new to a business.
National-Level Enforcement
The AI Act harmonizes enforcement powers for national supervisory authorities, including significant administrative fines. However, it also empowers EU Member States to establish national enforcement rules, which could include criminal liability for AI misuse.
Organizations must monitor legal developments across the EU and anticipate potential local deviations. Dumont notes that minimizing additional local enforcement rules is crucial to avoiding a fragmented EU legal framework around AI, which could hinder innovation.
Clarifications from Regulators
As a pioneering legal framework, the EU AI Act introduces new legal concepts. Dumont anticipates that the European Commission and industry bodies will provide further guidance. Article 96 of the AI Act outlines areas where the European Commission must develop guidelines.
Further guidance and practical application will be required to fully understand how these concepts should be interpreted in practice.
Guidance is forthcoming on high-risk AI systems, transparency requirements, substantial modifications, and the interplay between the AI Act and EU product safety legislation. Codes of practice and standards will further inform practical implementation.
Addressing Transparency Challenges and Intellectual Property
The AI Act mandates transparency, especially for high-risk AI systems. However, this requirement can conflict with the need to protect trade secrets and intellectual property, a tension the EU legislator acknowledges in the AI Act.
The Act states that transparency obligations apply without prejudice to intellectual property rights. Dumont notes that all parties must find the right balance between ensuring safe, reliable, and trustworthy AI and protecting the interests of providers that invest in AI development.
Mitigating Risks with Third-Party AI Vendors
When companies utilize third-party AI vendors, in-house lawyers must assess and mitigate the risks involved.
“It is important for in-house lawyers to conduct appropriate diligence before third-party AI systems are rolled out within the organization.”
The AI Act imposes several obligations on AI system vendors that will aid this process. Vendors of high-risk AI systems must provide sufficient documentation to enable deployers to understand how the system operates, and must also draft detailed technical instructions. Companies should update their vendor screening procedures accordingly, for example by using questionnaires to gather information and assess vendors’ AI compliance maturity.