OWASP top 10 for LLM F5 alignment

The OWASP Top 10 for LLM Applications (2025) underscores how AI introduces new security and governance risks while raising the stakes for traditional application protection. As organizations move from AI pilots to production, security teams need controls that can keep pace with evolving adversarial techniques and still meet privacy and compliance requirements.

The following sections map each OWASP LLM risk area to relevant F5 capabilities: F5 AI Guardrails for model-agnostic runtime protection and policy enforcement across AI interactions, F5 Web App and API Protection (WAAP) to secure the application and API surfaces that enable AI workflows, and F5 AI Red Team to proactively evaluate model and application vulnerabilities before attackers do.

LLM01: Prompt Injection

Prompt injection occurs when user inputs manipulate a model into harmful behavior like revealing sensitive information, providing unauthorized access, or generating biased or toxic content. F5 AI Guardrails these risks by inspecting both prompts and model outputs for malicious activity and blocking dangerous or noncompliant interactions. It provides out-of-the-box protections for thousands of known prompt injection techniques, and an interface for quickly creating custom guardrails as new threats emerge.

LLM02: Sensitive Information Disclosure

Sensitive information disclosure is a two-way risk: AI applications may expose unauthorized data to users, and users may submit sensitive information—such as PII or proprietary data—that the system is not permitted to handle. F5 AI Guardrails addresses both by filtering sensitive data types, including PII, PCI, and PHI, to prevent noncompliant interactions. Teams can also define custom data patterns to monitor or block user inputs, model outputs, or both. In addition, AI Guardrails supports traditional access management functions like RBAC group provisioning to tailor policy enforcement by user or group.

LLM03: Supply Chain

Supply chain vulnerabilities in LLMs threaten the integrity of training data, models, and deployment platforms. The most significant risk stems from adopting pre-trained third-party models and low-rank adapters (LoRAs) used to fine-tune a base model without retraining it. With millions of models and LoRAs available on platforms like Hugging Face, the attack surface expands dramatically, creating opportunities for tampering and data poisoning across countless model variations. F5 AI Guardrails creates a consistent layer of policy enforcement across AI applications, regardless of model source. Deployed as a proxy that inspects interactions between users, agents, and applications, AI Guardrails applies common runtime security controls to mitigate risks from biased training data, model variability, and potential exploits before harmful outputs reach users. Additionally, F5 AI Red Team conducts large-scale attack simulations to evaluate the specific risks and behavioral tendencies of individual models. Because risk profiles can vary widely even within the same model family, AI Red Team delivers rapid, actionable insight into vulnerabilities before attackers can exploit them.

LLM04: Data and Model Poisoning

Data and model poisoning involves malicious manipulation of training, fine-tuning, or retrieval data that can degrade output quality, introduce bias, or embed backdoor triggers. F5 mitigates poisoning risks by protecting exposed application and API surfaces with F5 Web App and API Protection (WAAP). F5 WAAP capabilities such as Bot Defense and DDoS Mitigation reduce automated injection of malicious content into ingestion and feedback workflows, while API Discovery and Security help identify and govern data collection and retrieval APIs, one of the most common attack paths used to influence model behavior. In addition, F5 AI Guardrails can act as a runtime compensating control by detecting and blocking known poisoning indicators, such as trigger phrases or patterns, through custom guardrails.

LLM05: Improper Output Handling

Improper output handling occurs when LLM-generated outputs are insufficiently validated or sanitized before being passed to downstream systems. Because attackers can influence model outputs through crafted prompts, this can lead to indirect access to sensitive functionality and downstream impacts such as XSS or CSRF in browsers, or SSRF, privilege escalation, and remote code execution in backend systems. F5 AI Guardrails mitigates this risk by inspecting model outputs before they are delivered to users or routed to downstream tools, blocking or logging unsafe content based on defined policies. F5 Web App and API Protection (WAAP) further reduces exposure by applying established web and API security controls—including API discovery, schema validation, and threat protections—to the applications and APIs that consume LLM output, preventing malicious payloads from becoming executable attacks.

LLM06: Excessive Agency

Excessive agency occurs when a model or agent is granted more autonomy, permissions, or tool access than necessary, allowing manipulated inputs to trigger unintended actions such as sending messages, executing transactions, or exfiltrating data. F5 AI Guardrails mitigates this risk by enforcing policy-based limits on what models can request or output, and by blocking noncompliant interactions before they reach tools or downstream systems. F5 Web App and API Protection (WAAP) provides an additional safeguard by enforcing authentication, authorization, and rate limiting on tool and plugin APIs, ensuring underlying services remain protected even if an agent attempts an unsafe action.

LLM07: System Prompt Leakage

System prompt leakage occurs when attackers extract hidden instructions, credentials, or policy logic embedded in system prompts and use that knowledge to bypass controls or escalate attacks. F5 AI Guardrails mitigates this risk by detecting and blocking common extraction patterns in user inputs and filtering responses to prevent the model from returning restricted prompt content. F5 AI Red Team proactively tests models and applications for prompt leakage and system prompt-extraction susceptibility at scale, providing actionable findings to harden prompts, policies, and runtime controls before deployment.

LLM08: Vector and Embedding Weaknesses

Vector and embedding weaknesses arise in Retrieval-Augmented Generation (RAG) and embedding-based systems when attackers manipulate, poison, or exploit retrieval content to influence responses, trigger sensitive data exposure, or bypass safety controls. F5 Web App and API Protection (WAAP) helps mitigate these risks by protecting the APIs used to ingest documents, manage collections, and perform retrieval queries, including API discovery, schema enforcement, and bot and abuse prevention. F5 AI Guardrails further reduces downstream impact by applying runtime inspection and custom policies to model outputs, helping detect suspicious patterns, unsafe content, or sensitive data before responses are delivered to users.

LLM09: Misinformation

Misinformation risk occurs when an LLM produces incorrect or misleading outputs that can drive unsafe decisions, reputational harm, or compliance failures. F5 AI Guardrails mitigates this risk with content moderation guardrails that flag or block high-risk content categories, require specific response structures, or trigger additional controls when content appears uncertain or noncompliant. F5 AI Red Team evaluates how AI systems behave under adversarial questioning and edge cases, helping teams quantify misinformation tendencies and tune policies and workflows before production rollout.

LLM10: Unbounded Consumption

Unbounded consumption occurs when attackers or misconfigured clients drive excessive model usage, leading to resource exhaustion, degraded availability, or unexpected cost amplification through high-volume requests, oversized prompts, or abusive automation. F5 Web App and API Protection (WAAP) mitigates this risk by enforcing rate limiting and DDoS mitigation at the application and API layer to reduce abusive traffic before it reaches model infrastructure. Combined with AI-facing API discovery and security controls, F5 WAAP helps contain both availability and cost risks associated with uncontrolled LLM usage.

AI security is a moving target

Securing LLM applications requires more than point-in-time testing or model-provider defaults. An effective AI security strategy demands continuous runtime policy enforcement and robust protection of the APIs, tools, and data systems that LLMs depend on. Across the OWASP Top 10 for LLM Applications (2025), F5 delivers by combining:

  1. F5 AI Guardrails - Adaptive, model-agnostic runtime security with F5 AI Guardrails to prevent data leakage, adversarial manipulation, and harmful outputs
  2. F5 AI Red Team - Proactive testing through F5 AI Red Team to uncover model and workflow-specific weaknesses early and translate findings into active defenses
  3. F5 Web App and API Protection (WAAP) - Application and API controls to reduce exposure, abuse, and resource risk

Together, these layers enable enterprises to scale AI while maintaining resilience, compliance, and user trust as LLM-driven experiences evolve.

Deliver and Secure Every App
F5 application delivery and security solutions are built to ensure that every app and API deployed anywhere is fast, available, and secure. Learn how we can partner to deliver exceptional experiences every time.
Connect With Us
OWASP top 10 for LLM F5 alignment | F5