Meta’s chief global affairs officer, Joel Kaplan, said on Friday that the U.S. tech giant will not sign the EU’s new voluntary code of practice for general-purpose AI, citing legal uncertainties and measures that go beyond the scope of Europe’s main AI law.
In a statement posted on LinkedIn, Kaplan said the company will not be signing the Code of Practice for General-Purpose AI (GPAI), a set of nonbinding guidelines covering AI transparency, copyright, and security.
Designed for developers of general-purpose AI models, the code aims to help them prepare for, and comply with, the AI Act, which takes effect in stages starting Aug. 2.
“Europe is heading down the wrong path on AI,” Kaplan said. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
The EU’s AI Act creates one system for all EU countries, dividing AI into four risk levels: unacceptable, high, limited, and minimal. High-risk systems, like those in critical infrastructure or hiring, face strict requirements, including safety checks and documentation.
It covers the regulation of large language models and foundation models such as Meta’s Llama, OpenAI’s GPT-4, Google DeepMind’s Gemini, and Anthropic’s Claude.
Companies failing to comply could face fines ranging from 7.5 million euros ($8.7 million) or 1.5 percent of turnover, to as much as 35 million euros ($38.2 million) or 7 percent of global turnover.
Businesses Voice Concerns
The release of the GPAI Code was delayed several times before the European Commission published the final version on July 10.
The EU said companies that voluntarily sign the GPAI Code will face a lighter administrative burden and gain more legal certainty than those proving compliance through other methods. Last week, ChatGPT-maker OpenAI announced its intention to sign the code.
Kaplan pointed to industry uncertainty over the EU’s AI regulation, citing concerns raised by 44 of Europe’s largest companies, including Bosch, Siemens, SAP, Lufthansa, Airbus, and BNP Paribas.
In an open letter, dozens of top European business leaders earlier this month urged EU officials to postpone key parts of the AI Act by two years, warning that the current rules are too complex and risk undermining Europe’s competitiveness in artificial intelligence.
The group said the AI Act, set to impose new obligations on both high-risk AI systems and general-purpose AI models starting in 2025 and 2026, could stifle innovation if implemented too quickly.
“We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,” Kaplan said.
New Guidelines
Meta’s announcement came on the same day the European Commission published new guidelines explaining how general-purpose AI companies must comply with the EU’s AI Act.
The guidelines list several key requirements, including writing clear technical documentation, explaining what data was used to train the models, setting copyright policies, and protecting AI systems from misuse or hacking.
For the most advanced AI models that could pose risks to public safety, human rights, or society, developers will also need to run safety tests, reduce potential harms, and report serious incidents to EU regulators.
“By providing legal certainty on the scope of the AI Act obligations for general-purpose AI providers, we are helping AI actors, from start-ups to major developers, to innovate with confidence, while ensuring their models are safe, transparent, and aligned with European values,” the EU’s executive vice-president for tech sovereignty, security, and democracy, Henna Virkkunen, said in a statement.
To support innovation, the commission said companies making significant modifications to existing models will only need to document the changes and the new training data used, rather than provide full documentation of the entire model.
Officials said this approach is designed to enable most developers to build on existing models without facing excessive regulation.