Large language models (LLMs) are making their way into U.S. Department of Defense (DoD) workflows, including pilots inside Impact Level 5 (IL5) and Impact Level 6 (IL6) environments. IL5 covers controlled unclassified information (CUI) and mission-sensitive data, while IL6 extends to secret-level classified information. These designations represent some of the DoD’s most secure networks.
It’s tempting to believe that running an LLM in IL5 or IL6 with read-only data access guarantees safety. But that assumption ignores a critical reality: prompt injection attacks don’t target networks or permissions; they target the model’s logic. Even a “read-only” LLM inside the most secure enclave can be manipulated into leaking information or ignoring policy. This blog post explains why IL5/6 protections aren’t enough, how prompt injection works, and what steps DoD cybersecurity teams need to take.
IL5 and IL6 accreditation ensure strong network and data protections. They’re designed to keep adversaries out and safeguard mission-critical systems. But application-layer threats bypass perimeter defenses entirely. Prompt injection exploits the way LLMs process instructions, not their network context. A model in IL6 can still be tricked if a malicious or misleading prompt reaches it. The result isn’t a traditional network breach; it’s the AI system itself becoming the attack vector.
Prompt injection is simple in concept and devastating in practice. Instead of hacking code, the attacker provides crafted text that causes AI to override its rules or expose information. LLMs don’t inherently distinguish between “safe” system instructions and malicious ones if they’re presented together.
Real-world cases show how easily this can happen:
A common mitigation is giving LLMs only read-only access to data. That reduces the risk of LLMs altering systems, but it doesn’t prevent them from leaking information they’ve read. A prompt injection can make an AI model summarize or dump entire sensitive documents, even if it wasn’t supposed to expose them.
To minimize risk, many DoD pilots are using retrieval-augmented generation (RAG). Instead of pre-training LLMs on sensitive corpora, RAG fetches only relevant snippets from curated databases per query. This reduces exposure and aligns with data minimization principles. RAG has clear benefits as it keeps most sensitive data out of the model’s long-term memory, grounds answers in vetted content, and supports auditability. However, RAG doesn’t eliminate prompt injection.
Securing LLMs ultimately requires a mindset shift: treat the AI as untrusted until proven otherwise. Applying zero trust to LLMs means verifying and limiting every input, treating outputs as untrusted until scanned and approved, minimizing what the model can see or do, and monitoring every interaction for anomalies.
In many DoD use cases, users interact with LLMs via vendor-hosted APIs (for example, calling OpenAI or Azure OpenAI endpoints from an application). This API layer introduces its own set of security concerns, including model abuse, over-permissioned tokens, injection payloads via JSON, and endpoint sprawl. F5 Distributed Cloud Web App and API Protection (WAAP) solutions address these challenges by discovering AI-related API endpoints, enforcing schema validation, detecting anomalies, and blocking injection attempts in real time.
Today, most DoD LLM usage connects to vendor-hosted models. These outbound AI queries create a blind spot: encrypted TLS traffic carrying potentially sensitive prompts and responses. F5 BIG-IP SSL Orchestrator addresses this by decrypting and orchestrating outbound traffic so it can be inspected against policy. BIG-IP SSL Orchestrator ensures DoD teams can see exactly what data is sent to external AI services, apply data loss prevention (DLP) rules to prevent leaks, and audit all AI interactions.
As the DoD moves toward hosting internal LLMs on IL5/IL6 infrastructure, F5 AI Gateway becomes the enforcement point that keeps every prompt and answer within defined guardrails—a zero trust checkpoint for AI behavior. It can block prompt injection in real time, enforce role-based data access, and log every interaction for compliance.
Generative AI offers huge mission advantages, but only if adopted with eyes open. IL5/6 won’t save you from prompt injection but a layered, zero trust approach can. DoD teams should integrate AI usage into zero-trust architectures now, monitor aggressively, and enforce controls on AI data flows just as you do for sensitive human communications.
For more information, visit the F5 public sector solutions web page.