Top AI companies suffer from poor AI risk management, says French non-profit

Leading AI developers are poor at risk management, according to ratings published by SaferAI on Wednesday (2 October), with French company Mistral AI scoring among the worst.

SaferAI, a French non-profit that aims to “incentivise the development and deployment of safer AI systems,” rated the risk management practices of Anthropic, OpenAI, Google DeepMind, Meta, Mistral, and xAI as moderate or worse.

SaferAI CEO Simeon Campos told Euractiv, “The reason we don’t see large-scale AI harms is that AI systems don’t yet have high enough capabilities to cause such harms, not that companies do proper risk management.”

As technology advances at an “astonishing rate,” there is an “urgent need for robust risk management practices in the AI industry,” he added.

The companies were graded on risk identification, tolerance and analysis, and mitigation, covering specific practices such as red teaming and the quantification of risk thresholds.

Anthropic, OpenAI, and Google DeepMind scored moderately well, with Meta not far behind. Their scores were primarily driven by their ratings in risk identification, thanks to safety testing and red teaming exercises, but they varied in how actively they analyse and mitigate the risks they find. Meta scored “very weak” on both risk analysis and mitigation.

Meanwhile, Mistral and xAI scored “non-existent” on all criteria except identification, where they earned a “very weak” 0.25/5.

Meta, Mistral, and xAI have released their models as open source, meaning users can directly download, modify, and run the models rather than accessing them through an interface. SaferAI’s website says this is “not inherently problematic” but becomes irresponsible in the absence of “thorough threat and risk modelling.”

The companies did not respond to Euractiv’s request for comment by the time of publication.

“I strongly encourage the development of initiatives like this one, which aim to improve our collective ability to assess and compare companies’ safety approaches,” said Yoshua Bengio, a Turing Award winner and leading AI researcher, according to SaferAI’s press release.

Bengio is also the chair of a key working group drafting a Code of Practice with the Commission’s AI Office that will detail what risk management measures providers of general-purpose AI (GPAI) should take to comply with the EU AI Act.

Meanwhile, the AI Office has been hiring staff to increase its technical capabilities around risk management.  

“A substantive part of the AI Office’s Regulation and Compliance unit is already focused on addressing the risks associated with generative AI, particularly those stemming from GPAI,” a Commission spokesperson told Euractiv in an email. 

The spokesperson said these staff mostly have legal and policy backgrounds, but that the Commission is working to hire more technical people.

“A good handful of people has joined the technical safety unit,” and “the recruitment of 25 technology specialists who mostly have a technical background, with degrees in computer science/engineering, with a great number of them also holding PhDs, is ongoing,” the spokesperson said. 

However, stakeholders have questioned the manner and speed with which the Commission is staffing the office, along with its technical competences. 

[Edited by Eliza Gkritsi/Alice Taylor-Braçe]




