The FDA Is Using an AI to “Speed Up” Drug Approvals and Insiders Say It’s Making Horrible Mistakes


Insiders at the Food and Drug Administration are ringing alarm bells over the agency’s use of an AI to fast-track drug approvals.

As CNN reports, six current and former FDA officials are warning that the AI, dubbed Elsa, which was unveiled just weeks ago, is “hallucinating” entirely made-up studies.

It’s a terrifying reality that could, in a worst-case scenario, lead to potentially dangerous drugs mistakenly getting the stamp of approval from the FDA.

It’s part of a high-stakes, rapidly accelerating effort by the US government to embrace still deeply flawed AI tech. Like other currently available AI chatbots, Elsa often simply makes stuff up.

“Anything that you don’t have time to double-check is unreliable,” one FDA employee told CNN. “It hallucinates confidently.”

Health and Human Services Secretary Robert F. Kennedy Jr., a prominent figure in the anti-vaccine movement who has no relevant credentials for the job and frequently promotes discredited conspiracy theories, lauded the administration’s embrace of AI as a sign that the “AI revolution has arrived.”

“We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,” he told Congress last month.

But reality is rapidly catching up, which shouldn’t be a surprise to anybody who’s used a large language model-based tool before. Given the tech’s track record so far, it’s little wonder that the medical community’s embrace of AI has already been mired in controversy, with critics pointing out the risks of overreliance.

Instead of saving scientists time, Elsa is doing the exact opposite, a pattern that echoes companies’ already backfiring attempts to shoehorn the tech into every corner of their operations.

“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have,” a second FDA employee told CNN.

The insiders claim that Elsa can’t really help them review drug applications, since it doesn’t have access to the relevant documentation. It can’t even “answer basic questions,” such as how many times a company has filed for FDA approval, according to CNN.

Worse yet, it often cites studies that don’t exist, and when challenged, it ends up being “apologetic” and tells employees that its output needs to be verified.

The damning claims fly in the face of the FDA’s attempts to paint Elsa as a revolutionary tool that can significantly speed up drug evaluations.

In a June statement, the FDA boasted that it was “already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.”

Meanwhile, the FDA’s head of AI, Jeremy Walsh, told CNN that it’s possible Elsa “could potentially hallucinate,” adding that employees “don’t have to use” the tool “if they don’t find it to have value.”

In an attempt at reassurance, Walsh said that Elsa’s hallucinations could be mitigated by asking it more detailed questions.

In many ways, Elsa couldn’t have come at a worse time, with Congress racing to figure out how to approach AI regulation. Instead of implementing new rules to avoid a disaster, the Trump administration has remained far more focused on clearing a regulatory path as tens of billions of dollars continue to pour into the industry.

In other words, there doesn’t appear to be much interest in reining in the FDA’s use of unproven AI tech, a concerning gamble that could one day come back to haunt the agency.

More on the FDA: The FDA Is Already Outsourcing Drug and Food Analysis to Error-Plagued AI Chatbot


