Major tech companies have secured privileged access to influence a key AI Act implementation document currently being drafted, Corporate Europe Observatory and Lobby Control have alleged.
The EU’s General-Purpose AI Code of Practice will set a global standard for AI systems like ChatGPT, and is in a critical phase of drafting, with the latest version incorporating several industry-friendly amendments.
“Big Tech enjoyed structural advantages from early on in the process and – playing its cards well – successfully lobbied for a much weaker code,” the investigation reads, citing “insider interviews and analysis of lobby papers.”
The Commission oversees the process of drafting the code, but 13 expert chairs are writing the text.
Through “provider workshops,” tech companies were granted private sessions with the chairs, while other participants were mostly restricted to written input, the report highlights.
In joint meetings, “controversial questions would sometimes be side-stepped,” and summaries of industry workshops were not shared despite a Commission promise to do so, participants told the report authors.
No one received any “structural favouring”, Commission spokesperson Thomas Regnier told Euractiv. All participants “had the same opportunity to engage in the process through the same channels,” and the chairs also had dedicated workshops with civil society, downstream providers, and SMEs, he said.
However, there have been four provider workshops and only one of each of the others, according to Commission websites.
Tech companies have also claimed they are under-represented in the process, making up only around 5% of participants.
The report also raises concerns about potential conflicts of interest at the three consultancies assisting with the process.
Wavestone works with Google Cloud and AWS, and was named Microsoft’s “partner of the year” in 2024. Intellera’s current parent company, Accenture, is described as “a key partner” of many of the companies involved, while the think tank CEPS counts several of them among its corporate members.
According to Laura Nicolas, membership coordinator at CEPS, none of the mentioned companies have access to, or are involved with, any CEPS task force. The report’s claim that members give “input on CEPS research priorities” refers to an anonymous annual survey, she added.
Neither of the other two consultancies responded to Euractiv’s request for comment in time for publication.
Discrimination at play
The NGOs published written input from Meta, Microsoft and Google, revealing that Google and Microsoft opposed the inclusion of discrimination risks in the code’s risk taxonomy, using near-identical language.
All three companies argued that the code’s copyright requirements go beyond the AI Act.
In the next draft, discrimination risks were made optional, and copyright obligations were softened. The final version is due 2 May, although the Commission has hinted that there could be delays.
(aw, jp)