09/15/2025 | News release | Distributed by Public on 09/15/2025 04:56
In many DoD use cases, users interact with LLMs via vendor-hosted APIs (for example, calling OpenAI or Azure OpenAI endpoints from an application). This API layer introduces its own set of security concerns, including model abuse, over-permissioned tokens, injection payloads via JSON, and endpoint sprawl. F5 Distributed Cloud Web App and API Protection (WAAP) solutions address these challenges by discovering AI-related API endpoints, enforcing schema validation, detecting anomalies, and blocking injection attempts in real time.
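To make the schema-validation idea concrete, here is a minimal sketch of the kind of check a WAAP layer applies to an AI API payload before it reaches the model endpoint. This is illustrative only, not F5's implementation; the field names, types, and size limit are assumptions.

```python
# Conceptual sketch (not F5's implementation): enforce an allow-list schema
# on an AI chat API request. Unknown fields, wrong types, and oversized
# prompts are all reported as violations. Limits are illustrative.

ALLOWED_FIELDS = {"model": str, "prompt": str, "max_tokens": int}
MAX_PROMPT_CHARS = 4000

def validate_request(payload: dict) -> list[str]:
    """Return a list of violations; an empty list means the request passes."""
    violations = []
    for field in payload:
        if field not in ALLOWED_FIELDS:
            violations.append(f"unexpected field: {field}")
    for field, expected_type in ALLOWED_FIELDS.items():
        if field in payload and not isinstance(payload[field], expected_type):
            violations.append(f"wrong type for {field}")
    prompt = payload.get("prompt", "")
    if isinstance(prompt, str) and len(prompt) > MAX_PROMPT_CHARS:
        violations.append("prompt exceeds size limit")
    return violations
```

A request that passes returns an empty list; a payload smuggling an extra field (a common injection vector in JSON APIs) is flagged and can be blocked before it reaches the model.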
Today, most DoD LLM usage connects to vendor-hosted models. These outbound AI queries create a blind spot: encrypted TLS traffic carrying potentially sensitive prompts and responses. F5 BIG-IP SSL Orchestrator addresses this by decrypting and orchestrating outbound traffic so it can be inspected against policy. BIG-IP SSL Orchestrator ensures DoD teams can see exactly what data is sent to external AI services, apply data loss prevention (DLP) rules to prevent leaks, and audit all AI interactions.
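The DLP step can be pictured as a rule set applied to the decrypted prompt before it leaves for the external service. The sketch below is conceptual, not the SSL Orchestrator product; the rule names and patterns are assumptions for illustration.

```python
import re

# Conceptual sketch (not the SSL Orchestrator product): once outbound TLS
# is decrypted, scan the prompt against DLP patterns before it is sent to
# an external AI service. Patterns are illustrative, not exhaustive.

DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "classification_marking": re.compile(r"\b(SECRET|TOP SECRET|CUI)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of DLP rules the outbound prompt triggers."""
    return [name for name, pattern in DLP_RULES.items() if pattern.search(prompt)]

def allow_outbound(prompt: str) -> bool:
    """Block the request if any DLP rule fires; otherwise let it through."""
    hits = scan_prompt(prompt)
    if hits:
        # In a real deployment each block decision would be logged for audit.
        return False
    return True
```

A benign query passes; a prompt containing an SSN or a classification marking is stopped at the inspection point rather than leaking to the vendor-hosted model.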
As the DoD moves toward hosting internal LLMs on IL5/IL6 infrastructure, F5 AI Gateway becomes the enforcement point that keeps every prompt and answer within defined guardrails: a zero trust checkpoint for AI behavior. It can block prompt injection in real time, enforce role-based data access, and log every interaction for compliance.
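The three gateway duties described above (injection screening, role-based access, audit logging) can be sketched as a single checkpoint function. This is a conceptual illustration, not the F5 AI Gateway API; the injection markers, roles, and data tiers are assumptions.

```python
import json
import time

# Conceptual sketch of a gateway-style zero trust checkpoint (not the
# F5 AI Gateway API): screen each prompt with simple injection heuristics,
# enforce a role-to-data-tier check, and emit an audit record for every
# interaction. Markers, roles, and tiers are illustrative.

INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")
ROLE_PERMISSIONS = {"analyst": {"unclassified"}, "commander": {"unclassified", "restricted"}}

def checkpoint(user_role: str, data_tier: str, prompt: str) -> dict:
    lowered = prompt.lower()
    injected = any(marker in lowered for marker in INJECTION_MARKERS)
    authorized = data_tier in ROLE_PERMISSIONS.get(user_role, set())
    decision = "allow" if authorized and not injected else "block"
    record = {
        "ts": time.time(),
        "role": user_role,
        "tier": data_tier,
        "decision": decision,
        "injection_detected": injected,
    }
    print(json.dumps(record))  # audit log line: every interaction is recorded
    return record
```

Note that the audit record is emitted whether the request is allowed or blocked, which is what makes the checkpoint useful for compliance review, not just enforcement.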
Generative AI offers huge mission advantages, but only if adopted with eyes open. IL5/IL6 hosting won't save you from prompt injection, but a layered, zero trust approach can. DoD teams should integrate AI usage into zero trust architectures now, monitor aggressively, and enforce controls on AI data flows just as they do for sensitive human communications.
For more information, visit the F5 public sector solutions web page.