The EU AI Act isn’t enough to save humanity from extinction, Stuart Russell, a world-renowned AI expert, told Euractiv.
AI regulation is falling out of fashion globally, as countries and regions race for geopolitical dominance in artificial intelligence. The EU is no exception. Under mounting industry pressure, the European Commission is considering pausing AI Act implementation, and activists fear a dilution of its general-purpose AI (GPAI) provisions in a new Code of Practice expected to be published in the coming days.
Russell, a professor of computer science at UC Berkeley, joined a last-ditch open letter urging the EU to ‘resist pressure’ from industry’s final push to derail the Code, after a year of intense lobbying.
“[To industry], it doesn’t matter what the document says. The companies want to have no regulation at all,” Russell told Euractiv.
To Russell and fellow signatories – including Nobel laureates Geoffrey Hinton and Daron Acemoglu – that’s a recipe for disaster. They are calling for mandatory third-party audits to be baked into the Code, ensuring companies can’t simply claim that GPAIs like ChatGPT are safe without checks.
But even the AI Act in its strongest form is too lenient to protect against future risks, according to Russell. “Even if your system is incredibly dangerous… there’s nothing in the rules that say you can’t access the market,” he warned.
“Once you have systems that can take control of our civilization and planet, then a fine capped at a single-digit percentage of global revenues is ridiculous,” he added.
Extinction
Russell’s view is controversial.
The author of the leading textbook on AI belongs to a growing group of pioneers who now believe the technology poses an existential threat. Others dismiss this “AI doomerism” as speculative science fiction.
“I think it’s bizarre. The press keeps characterising these [existential AI] risks as fringe… but if you look at the top five CEOs or top five AI researchers in the world, with the exception of Yann LeCun, every single one says: No, this is real.”
Even European Commission President Ursula von der Leyen cited AI “extinction risks” in a 2023 speech. In May, she warned that AI could “approach human reasoning” as early as next year.
Yet no meaningful actions have been taken to address such risks, Russell said. He fears that real regulation will only come in response to a “Chernobyl-sized disaster.”
Real regulation, he argues, would involve safety proofs akin to those required for nuclear plants, but with higher safety thresholds.
“But you’re not going to get anything close to a mathematical guarantee,” Russell said. “Companies haven’t the faintest idea how their systems work.”
For now, all he can hope for is mandatory external tests in the EU’s upcoming Code of Practice.
“It wouldn’t be enough… but it would help considerably,” he added.