NIST - National Institute of Standards and Technology

05/05/2026 | Press release | Distributed by Public on 05/05/2026 05:30

CAISI Signs Agreements Regarding Frontier AI National Security Testing With Google DeepMind, Microsoft and xAI

WASHINGTON - Today, the Center for AI Standards and Innovation (CAISI) at the Department of Commerce's National Institute of Standards and Technology announced new agreements with Google DeepMind, Microsoft and xAI. Through these expanded industry collaborations, CAISI will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security. These agreements build on previously announced partnerships, which have been renegotiated to reflect CAISI's directives from the secretary of commerce and America's AI Action Plan.

Under the direction of Secretary Howard Lutnick, CAISI has been designated to serve as industry's primary point of contact within the U.S. government to facilitate testing, collaborative research and best practice development related to commercial AI systems.

CAISI's agreements with frontier AI developers enable government evaluation of AI models before they are publicly available, as well as post-deployment assessment and other research. To date, CAISI has completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."

These agreements support information sharing, drive voluntary product improvements and ensure that the government clearly understands AI capabilities and the state of international AI competition. To thoroughly evaluate national security-related capabilities and risks, developers frequently provide CAISI with models that have reduced or removed safeguards. Evaluators from across government may participate in evaluations and regularly provide feedback through the CAISI-convened TRAINS Taskforce, a group of interagency experts focused on AI national security concerns. The agreements support testing in classified environments and were drafted with the flexibility required to respond rapidly to continued AI advancements.
