Senator’s RISE Act would require AI developers to list training data, evaluation methods in exchange for ‘safe harbor’ from lawsuits



Amid an increasingly tense and destabilizing week in international news, technical decision-makers should note that some lawmakers in the U.S. Congress are still moving forward with proposed AI regulations that could reshape the industry in powerful ways and help steady it going forward.

Case in point, yesterday, U.S. Republican Senator Cynthia Lummis of Wyoming introduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), the first stand-alone bill that pairs a conditional liability shield for AI developers with a transparency mandate on model training and specifications.

As with any proposed legislation, the bill would need to pass both the U.S. Senate and House by majority vote and be signed by U.S. President Donald J. Trump before it becomes law, a process likely to take months at minimum.

“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” wrote Lummis on her account on X when announcing the new bill. “We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”

The bill also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”

If enacted as written, the measure would take effect December 1, 2025, and apply only to conduct that occurs after that date.

Why Lummis says new AI legislation is necessary

The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.

Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.

In a statement on her website, Lummis calls the measure “predictable standards that encourage safer AI development while preserving professional autonomy.”

With bipartisan concern mounting over opaque AI systems, RISE gives Congress a concrete template: transparency as the price of limited liability. Industry lobbyists may press for broader redaction rights, while public-interest groups could push for shorter disclosure windows or stricter opt-out limits. Professional associations, meanwhile, will scrutinize how the new documents can fit into existing standards of care.

Whatever shape the final legislation takes, one principle is now firmly on the table: in high-stakes professions, AI cannot remain a black box. And if the Lummis bill becomes law, developers who want legal peace will have to open that box—at least far enough for the people using their tools to see what is inside.

How the new ‘safe harbor’ provision shielding AI developers from lawsuits works

RISE offers immunity from civil suits only when a developer meets clear disclosure rules:

  • Model card – A public technical brief that lays out training data, evaluation methods, performance metrics, intended uses, and limitations.
  • Model specification – The full system prompt and other instructions that shape model behavior, with any trade-secret redactions justified in writing.

The developer must also publish known failure modes, keep all documentation current, and push updates within 30 days of a version change or newly discovered flaw. Miss the deadline—or act recklessly—and the shield disappears.
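
For illustration only, here is a minimal sketch in Python of how a developer might represent such a disclosure as structured metadata, plus a check against the 30-day update window. All field names and values are assumptions made for the example; the bill does not prescribe any particular format.

```python
from datetime import date

# Hypothetical model card expressed as structured metadata.
# Field names and values are illustrative, not taken from the RISE Act.
model_card = {
    "model_name": "example-clinical-assistant",
    "training_data": "De-identified clinical notes and licensed medical texts (summary)",
    "evaluation_methods": ["held-out test set", "clinician review panel"],
    "performance_metrics": {"accuracy": 0.91, "calibration_error": 0.04},
    "intended_uses": ["drafting treatment-plan summaries for physician review"],
    "limitations": ["not validated for pediatric cases"],
    "known_failure_modes": ["overconfident dosing suggestions for rare drugs"],
    "redactions": [
        {"section": "system_prompt_routing", "justification": "trade secret"}
    ],
    "last_updated": date(2025, 12, 15),
}

def disclosure_is_current(card: dict, version_change: date, today: date,
                          window_days: int = 30) -> bool:
    """True if the card was refreshed after the version change, or the
    30-day disclosure window since that change has not yet closed."""
    if card["last_updated"] >= version_change:
        return True
    return (today - version_change).days <= window_days
```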

Professionals such as doctors and lawyers remain ultimately liable for using AI in their practices

The bill does not alter existing duties of care.

A physician who misreads an AI-generated treatment plan, or a lawyer who files an AI-written brief without vetting it, remains liable to the patient or client.

The safe harbor is unavailable for non-professional use, fraud, or knowing misrepresentation, and it expressly preserves any other immunities already on the books.

Reaction from AI 2027 project co-author

Daniel Kokotajlo, policy lead at the nonprofit AI Futures Project and a co-author of the widely circulated scenario planning document AI 2027, took to his X account to state that his team advised Lummis’s office during drafting and “tentatively endorse[s]” the result. He applauds the bill for nudging transparency yet flags three reservations:

  1. Opt-out loophole. A company can simply accept liability and keep its specifications secret, limiting transparency gains in the riskiest scenarios.
  2. Delay window. Thirty days between a release and required disclosure could be too long during a crisis.
  3. Redaction risk. Firms might over-redact under the guise of protecting intellectual property; Kokotajlo suggests forcing companies to explain why each blackout truly serves the public interest.

The AI Futures Project views RISE as a step forward but not the final word on AI openness.

What it means for devs and enterprise technical decision-makers

The RISE Act’s transparency-for-liability trade-off will ripple outward from Congress straight into the daily routines of four overlapping job families that keep enterprise AI running. Start with the lead AI engineers—the people who own a model’s life cycle. Because the bill makes legal protection contingent on publicly posted model cards and full prompt specifications, these engineers gain a new, non-negotiable checklist item: confirm that every upstream vendor, or the in-house research squad down the hall, has published the required documentation before a system goes live. Any gap could leave the deployment team on the hook if a doctor, lawyer, or financial adviser later claims the model caused harm.

Next come the senior engineers who orchestrate and automate model pipelines. They already juggle versioning, rollback plans, and integration tests; RISE adds a hard deadline. Once a model or its spec changes, updated disclosures must flow into production within thirty days. CI/CD pipelines will need a new gate that fails builds when a model card is missing, out of date, or overly redacted, forcing re-validation before code ships.
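
A minimal sketch of what such a gate might look like, assuming the vendor’s model card is available as a local JSON file; the file name, required fields, and staleness threshold below are invented for the example and are not specified by the bill.

```python
import json
import sys
from datetime import date, datetime

MODEL_CARD_PATH = "model_card.json"   # assumed location; a real pipeline might pull from a registry
REQUIRED_FIELDS = [
    "training_data", "evaluation_methods", "performance_metrics",
    "intended_uses", "limitations", "known_failure_modes", "last_updated",
]
MAX_STALENESS_DAYS = 30  # rough proxy for the bill's 30-day update window

def check_model_card(path: str) -> list[str]:
    """Return a list of failures; an empty list means the gate passes."""
    try:
        with open(path) as f:
            card = json.load(f)
    except FileNotFoundError:
        return [f"model card not found at {path}"]

    failures = [f"missing or empty field: {field}"
                for field in REQUIRED_FIELDS if not card.get(field)]

    if card.get("last_updated"):
        age_days = (date.today() - datetime.fromisoformat(card["last_updated"]).date()).days
        if age_days > MAX_STALENESS_DAYS:
            failures.append(f"model card is {age_days} days old (limit {MAX_STALENESS_DAYS})")

    for redaction in card.get("redactions", []):
        if not redaction.get("justification"):
            failures.append(f"redaction of '{redaction.get('section', '?')}' lacks a written justification")

    return failures

if __name__ == "__main__":
    problems = check_model_card(MODEL_CARD_PATH)
    for problem in problems:
        print(f"FAIL: {problem}")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the build
```

A check along these lines would run alongside existing integration tests and block deployment until the disclosure documents catch up with the code.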

The data-engineering leads aren’t off the hook, either. They will inherit an expanded metadata burden: capture the provenance of training data, log evaluation metrics, and store any trade-secret redaction justifications in a way auditors can query. Stronger lineage tooling becomes more than a best practice; it turns into the evidence that a company met its duty of care when regulators—or malpractice lawyers—come knocking.
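
As one hypothetical approach, a team could append lineage events to a simple queryable store; the table and column names here are invented for illustration and are not drawn from the bill.

```python
import json
import sqlite3
from datetime import datetime, timezone

# Append-only lineage log that auditors could query later.
# Schema and names are illustrative only.
conn = sqlite3.connect("model_lineage.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS lineage_events (
        recorded_at TEXT NOT NULL,         -- ISO-8601 timestamp
        model_version TEXT NOT NULL,
        dataset_source TEXT NOT NULL,      -- provenance of the training data
        evaluation_metrics TEXT NOT NULL,  -- JSON blob of metric name -> value
        redaction_justification TEXT       -- written reason for any trade-secret redaction
    )
""")

def record_lineage(model_version: str, dataset_source: str, metrics: dict,
                   redaction_justification: str | None = None) -> None:
    """Append one lineage event so a later audit can reconstruct what was disclosed and why."""
    conn.execute(
        "INSERT INTO lineage_events VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), model_version,
         dataset_source, json.dumps(metrics), redaction_justification),
    )
    conn.commit()

# Example: log an evaluation run tied to a new model version.
record_lineage("v2.4.0", "licensed-clinical-notes-2023",
               {"accuracy": 0.91, "calibration_error": 0.04},
               redaction_justification="system prompt sections 3-4: trade secret")
```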

Finally, the directors of IT security face a classic transparency paradox. Public disclosure of base prompts and known failure modes helps professionals use the system safely, but it also gives adversaries a richer target map. Security teams will have to harden endpoints against prompt-injection attacks, watch for exploits that piggyback on newly revealed failure modes, and pressure product teams to prove that redacted text hides genuine intellectual property without burying vulnerabilities.

Taken together, these demands shift transparency from a virtue into a statutory requirement with teeth. For anyone who builds, deploys, secures, or orchestrates AI systems aimed at regulated professionals, the RISE Act would weave new checkpoints into vendor due-diligence forms, CI/CD gates, and incident-response playbooks as soon as December 2025.


