12/04/2025 | Press release
WASHINGTON, D.C. - Today, Delaware's Representative Sarah McBride (D-DE) and Representative Jay Obernolte (R-CA) introduced the Resources for Evaluating and Documenting AI (READ AI) Models Act, bipartisan legislation to put basic transparency at the center of how artificial intelligence is evaluated and deployed.
Right now, most users of AI - from school districts to small businesses to local governments - have no consistent way to document how an AI model works, what data went into it, or how it was tested. Large tech companies often produce detailed documentation, but smaller teams may not have the staff or resources to do the same.
The READ AI Models Act complements NIST's AI Risk Management Framework by creating a modular, easy-to-use "nutrition label" for AI models - helping developers, researchers, and government agencies consistently disclose the information needed to assess the safety, performance, and risk of AI models. The bill also directs NIST to build an accompanying pilot tool and issue technical guidance to support implementation. All components would be built through a consensus-based process, and adoption would be voluntary.
"Artificial intelligence is shaping everything from how hospitals manage data to how small businesses streamline tasks," said Rep. McBride. "But right now, only big tech companies have the manpower and capacity to steadily compile and release information about the AI models they build and deploy. Smaller teams - including government agencies, schools, and nonprofits - are left guessing and may lack the resources to accomplish the same.
"The READ AI Models Act brings transparency to AI in a way that's simple, consistent, and accessible. Think of it as a clear, easy-to-read snapshot - a 'nutrition label' for an AI model. It levels the playing field and helps people make informed decisions. That's what transparency is all about. This bill is voluntary, bipartisan, and grounded in NIST's world-class expertise."
"As AI is deployed in every sector of our economy, it's essential that our sectoral regulators have the tools and information they need to oversee it effectively," said Rep. Obernolte. "The READ AI Models Act helps ensure agencies can understand and evaluate the models operating in their domains, strengthening our ability to manage risks while encouraging innovation. I'm proud to support this important step toward responsible, sector-specific AI governance."
The bill is designed not for major tech companies - which already produce evaluation documentation - but for the many small teams now responsible for evaluating and deploying AI in their own workplaces. Across federal, state, and local governments - and in countless small businesses, school systems, and nonprofits - staff are being asked to wear multiple hats. Many are now tasked with overseeing procurement, IT, cybersecurity, and AI deployment all at once. This legislation will give those teams a practical, trusted resource for evaluating and comparing models. Delawareans applauded the bill's introduction.
"As AI becomes more deeply embedded in how we work and live, we need clear, trustworthy information about the systems powering that change," said Delaware State Representative Krista Griffith, Chair of the Delaware AI Commission. "This legislation is a smart, practical step forward that complements the Delaware AI Commission's efforts to set strong, future-focused standards for this fast-evolving technology."
"Establishing clear, credible guidelines for how AI models are documented and evaluated is mission-critical for U.S. leadership in AI," said Sunita Chandrasekaran, Ph.D., Director, First State AI Institute. "With NIST's involvement, the READ AI Models Act is poised to set a trusted standard for both the public and private sectors, ensuring developers can consistently disclose how models are trained, validated, and tested before deployment. Without these guidelines, we risk fragmentation and bias."
"As a member of Delaware's AI Commission, I strongly support Congresswoman McBride's READ AI Models Act," said Patrick Callahan, member of the Delaware AI Commission. "In working with organizations implementing AI, the biggest challenge is building trust through transparency. NIST's voluntary documentation framework addresses exactly this need, particularly for smaller organizations and public sector entities that lack the resources of major tech companies. Delaware is building a regulatory sandbox to enable safe AI innovation, and this federal framework would complement that work. The modular approach recognizes that a healthcare AI system needs different documentation than a financial services model. This is the kind of consensus-based policy that will accelerate responsible AI adoption nationwide."
The READ AI Models Act would:
McBride serves on the House Science, Space, and Technology Committee and is a Member of the bipartisan Artificial Intelligence Caucus where she champions practical, trustworthy AI governance policy that strengthens innovation while protecting the public.
###