The EU AI Act, hailed as a landmark in artificial intelligence regulation, is facing significant challenges as tech giants intensify their lobbying efforts.
Laura Caroli, a senior fellow at the Wadhwani AI Center, emphasized the critical moment, stating, “The real fight is happening right now,” highlighting that the details of implementation are as important as the general principles. The focus is now on the Code of Practice for general-purpose AI models, which will define the compliance requirements for AI companies.
The code, originally scheduled for its third and final draft release on February 17, has been delayed. Risto Uuk, head of EU policy and research at the Future of Life Institute, the advocacy group led by MIT professor Max Tegmark, told PYMNTS that the delay is likely due to pressure from the tech industry. The code's voluntary requirements are slated to take effect in August.

Uuk expressed concern that the delay signals a potential weakening of the safety provisions in the face of opposition from major tech companies. Particularly contentious are the rules for AI models that pose a systemic risk, which would cover the largest models from companies such as OpenAI, Google, Meta, Anthropic, and xAI.
Reports indicate that some tech companies, like Meta, are actively lobbying to soften the AI Act’s requirements. Meta reportedly declined to sign the voluntary code of practice, while Google said the code was a “step in the wrong direction.” Tech companies also reportedly bristle at provisions covering how copyrighted material is used for training and at requirements for independent third-party risk assessments.
Uuk pointed out that many companies already implement these practices themselves, often in conjunction with organizations like the U.K. AI Safety Institute, and already publicly release technical reports. The new European Commission administration, which took office last December, also leans toward cutting red tape to spur innovation.
The Future of Life Institute gained prominence in the AI field with its 2023 open letter, which called for a six-month pause on the development of advanced AI models so that safety protocols could be established. Although the letter was endorsed by figures like Elon Musk and Steve Wozniak, Uuk noted that the expected pause in AI development never materialized. “Many of these [AI] companies have not increased their safety work, which the ‘pause’ letter called for,” he added. Similarly, in May 2024, OpenAI dissolved its AI safety team following high-profile resignations from its safety leadership, who said the company had deprioritized safety.
Despite the hurdles, regulatory action has gained momentum globally. The EU AI Act was adopted in March 2024, and South Korea and China have also introduced AI governance policies, while Brazil is developing its own AI Act. The U.S. approach remains fragmented, with individual states enacting their own laws.
Uuk also expressed disappointment with the recent AI Action Summit in Paris, citing its emphasis on innovation over safety, a departure from prior summits.