Understand the top risks facing autonomous AI agents and how F5 helps secure agent behavior, tools, APIs, and workflows.
The OWASP Agentic AI Top 10 helps organizations understand the most important security risks introduced by autonomous AI systems operating across tools, APIs, and data sources. As agentic workflows expand the attack surface during execution, the framework provides a practical way to assess risk, strengthen controls, and reduce exposure.
This page maps those risks to how F5 capabilities can help address them in practice.
Published by the OWASP GenAI Security Project, the framework focuses on risks that emerge when AI systems are given goals, memory, tools, and the ability to take actions across systems. Unlike traditional chatbot risks, agentic AI threats can spread across multi-step workflows, connected applications, and autonomous decisions. Common examples include agent goal hijacking, tool misuse, rogue agents, insecure inter-agent communication, and cascading failures.
Many organizations are still piloting agentic AI, but early workloads are already connecting to production systems. That means failures can now affect real data, APIs, workflows, and users. The framework gives CISOs, architects, and security teams a practical way to identify where risk is highest, align controls, and build safer AI systems from the start.
The framework organizes agentic AI risk into ten categories:

ASI01 Agent Goal Hijack – An attacker manipulates an agent’s objectives, instructions, or decision path so it pursues unintended outcomes.

ASI02 Tool Misuse and Exploitation – An agent uses connected tools in unsafe ways, or attackers exploit tool interfaces to gain access or cause harm.

ASI03 Identity and Privilege Abuse – Agents misuse credentials, tokens, or inherited permissions to access systems or data beyond intended limits.

ASI04 Agentic Supply Chain Vulnerabilities – Risks introduced through third-party tools, plugins, registries, MCP servers, or external components used in agent workflows.

ASI05 Unexpected Code Execution – An agent generates, modifies, or runs code or commands in ways that create security or operational risk.

ASI06 Context Management and Retrieval Manipulation – Retrieved or stored context is poisoned, misleading, stale, or tampered with, influencing future agent behavior.

ASI07 Insecure Inter-Agent Communication – Agents exchange messages without sufficient authentication, integrity, or policy controls, creating opportunities for misuse.

ASI08 Cascading Failures – A single error, compromise, or bad decision spreads across connected agents, tools, or workflows.

ASI09 Human-Agent Trust Exploitation – Agents use persuasive or misleading outputs to influence users into unsafe actions or approvals.

ASI10 Rogue Agents – Compromised, misaligned, or drifting agents continue operating in unintended ways inside complex systems.
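To make one of these categories concrete, the sketch below illustrates a basic mitigation for ASI07 (Insecure Inter-Agent Communication): signing messages between agents so a receiver can verify integrity before acting. This is a minimal, hypothetical example using a shared HMAC key; it is not an F5 product API, and a real deployment would use per-agent keys from a secrets manager plus transport-level authentication.

```python
import hmac
import hashlib
import json

# Shared secret for illustration only; real systems would use per-agent
# keys issued by a secrets store or identity provider (hypothetical setup).
SHARED_KEY = b"example-shared-key"

def sign_message(payload: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach an HMAC-SHA256 signature so the receiving agent can detect
    tampering in transit (one mitigation relevant to ASI07)."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_message(envelope: dict, key: bytes = SHARED_KEY) -> bool:
    """Recompute the HMAC over the payload and compare in constant time."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

# A planner agent signs a request before handing it to an executor agent.
msg = sign_message({"from": "planner", "to": "executor", "action": "fetch_report"})
print(verify_message(msg))  # valid signature

# If the payload is altered en route, verification fails and the
# receiving agent should refuse to act on the message.
msg["payload"]["action"] = "delete_all"
print(verify_message(msg))  # tampered payload
```

Integrity checks like this address only one slice of ASI07; production systems also need mutual authentication between agents and policy enforcement on what each agent is allowed to request.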