NSA/CSS - National Security Agency - Central Security Service

04/30/2026 | Press release | Distributed by Public on 04/30/2026 20:07

NSA joins the ASD’s ACSC and Others to Release Guidance on Agentic Artificial Intelligence Systems

FORT MEADE, Md. - Today, the National Security Agency (NSA) joins the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC) and others to release the Cybersecurity Information Sheet (CSI), "Careful Adoption of Agentic AI Services."

This report is a comprehensive guide to understanding and mitigating the unique risks associated with the rise of agentic artificial intelligence (AI) within critical infrastructure, including the defense sector. The CSI highlights general security considerations for agentic AI, including the inherited risks of large language models (LLMs), increased attack surfaces, increased complexity, the evolving security landscape as the technology matures, and the need to address AI security as part of established cybersecurity paradigms.

Unlike traditional generative AI, which typically requires human validation, agentic AI systems are designed to operate autonomously, making them powerful tools. This autonomy presents both unprecedented opportunities and significant cybersecurity challenges that organizations must address to protect national security and critical infrastructure.

"Careful Adoption of Agentic AI Services" outlines risk spaces to consider, including:

• Privilege Risks: Over-privileged agents can amplify the impact of a single compromise.
• Design and Configuration Risks: Insecure design and provisioning can introduce vulnerabilities.
• Behavior Risks: Goal misalignment, specification gaming, deceptive behavior, and emergent capabilities can lead to unexpected or undesirable outcomes.
• Structural Risks: The interconnected nature of agentic systems increases the attack surface and complexity.
• Accountability Risks: The opacity of agentic systems makes accountability hard to trace, complicating auditing and compliance.
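One common mitigation for the privilege risks described above is to enforce least privilege by gating every agent tool call through an explicit allowlist. The sketch below is illustrative only and is not drawn from the CSI; the roles, tool names, and `PrivilegeError` type are hypothetical.

```python
# Illustrative sketch (not from the CSI): enforcing least privilege by
# gating an agent's tool calls through an explicit allowlist, so a
# compromised or misaligned agent cannot invoke tools beyond its role.

class PrivilegeError(Exception):
    """Raised when an agent requests a tool outside its allowlist."""

TOOL_ALLOWLISTS = {
    # Hypothetical roles and tools, for illustration only.
    "report-summarizer": {"read_document", "write_summary"},
    "ticket-triager": {"read_ticket", "add_label"},
}

def invoke_tool(agent_role: str, tool: str, dispatch: dict):
    """Execute a tool only if the agent's role explicitly permits it."""
    allowed = TOOL_ALLOWLISTS.get(agent_role, set())
    if tool not in allowed:
        # Deny by default: an over-privileged agent amplifies the
        # impact of a single compromise.
        raise PrivilegeError(f"{agent_role!r} may not call {tool!r}")
    return dispatch[tool]()

# Usage: a triage agent can label tickets but cannot write summaries.
dispatch = {"add_label": lambda: "label added"}
print(invoke_tool("ticket-triager", "add_label", dispatch))  # label added
```

Deny-by-default dispatch of this kind keeps the blast radius of a compromised agent bounded by its role rather than by the full set of tools in the environment.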

Securing agentic AI systems requires proactive measures that address risks introduced by autonomy, interconnected components, and evolving capabilities. The best practices for securing agentic AI systems are divided into the following subcategories:

• Designing Secure Agents
• Developing Secure Agents
• Managing Third-Party Components
• Deploying Agents Securely
• Operating Agents Securely

The report recommends deploying agentic AI incrementally and continuously assessing it against evolving threat models, while maintaining strong governance, explicit accountability, rigorous monitoring, and human oversight, all of which are essential for safe and secure operation.

Organizations that use agentic AI services, including those in the defense sector, are encouraged to review this guidance and adopt the outlined cybersecurity mitigations.

Other agencies co-sealing this CSI are the Canadian Centre for Cyber Security (Cyber Centre), the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).

Read the full report here.

Visit our full library for more cybersecurity information and technical guidance.

NSA Media Relations | [email protected] | 443-634-0721


About the National Security Agency

Founded in 1952, NSA is a U.S. Department of War combat support agency and element of the U.S. Intelligence Community. The Agency's mission is to provide foreign signals intelligence to policy makers and our military, and to prevent and eradicate cybersecurity threats to U.S. national security systems, with a focus on the Defense Industrial Base and the improvement of U.S. weapons' security. From protecting U.S. warfighters around the world to enabling and supporting operations on land, in the air, at sea, in space, and in the cyber domain, NSA is committed to building public trust through transparency and protecting civil liberties and privacy consistent with our nation's values. 
NSA/CSS - National Security Agency - Central Security Service published this content on April 30, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on May 01, 2026 at 02:07 UTC.