Enterprises are discovering that securing AI requires purpose-built solutions. Traditional controls struggle when models behave dynamically, agents act autonomously, and risks rapidly evolve after deployment. Now that generative AI is actively in production, security, risk, and governance leaders are being asked to manage threats that surface in real time, often beyond the reach of legacy application security approaches.
Analyst research reflects this shift, offering different perspectives on how organizations should adapt their security strategies for AI-enabled systems. Rather than presenting these perspectives as a single point of agreement, this piece looks at how distinct analyst frameworks across Gartner, Forrester, and KuppingerCole approach AI risk, and how organizations can translate those perspectives into runtime guardrails and AI-specific red teaming.
AI risk at runtime
Across analyst research, one reality is increasingly clear: AI risk is dynamic. Unlike traditional applications, AI systems can change behavior based on prompts, context, and downstream integrations, which introduces new attack paths long after deployment.
While various analysts frame the challenge differently, their research highlights recurring considerations for enterprises deploying AI at scale:
- Many AI threats materialize during live use
- Model-provider safeguards alone are insufficient for enterprise governance
- Oversight requires continuous visibility, traceability, and enforcement
- Testing must evolve to reflect agentic, multi-step, and adaptive attacks
These conditions have pushed leading research to cover security controls that operate at runtime, where AI systems interact with users, data, and business workflows. This is also why we are officially introducing F5 AI Guardrails and F5 AI Red Team as two new solutions within the F5 Application Delivery and Security Platform (ADSP) designed to secure enterprise AI systems at runtime.
Gartner® Market Guide for AI Trust, Risk and Security Management
In its Market Guide for AI Trust, Risk and Security Management, Gartner describes “AI leaders must work with stakeholders across the organization to manage AI trust, risk, and security. They should establish an organizational structure for creating and updating gAI governance policies and for evaluating and implementing AI TRiSM technologies that enable and enforce these policies. This will help the organization systematically ensure safer, and more reliable, trustworthy and secure AI use.”
Our view for this, and why we attribute TRiSM as a leading reference point for AI security programs, is due to how it treats AI systems as ongoing operational assets that require continuous oversight, not one-time validation. Furthermore, a core component of the TRiSM framework is runtime inspection and enforcement, which Gartner defines as:
"Applied to models, applications and agent interactions to support transactional alignment with organizational governance policies. Applicable connections, processes, communications, inputs and outputs are inspected for violations of policies and expected behavior. Anomalies are highlighted and either blocked, autoremediated or forwarded to humans or incident response systems for investigation, triage, response and applicable remediation."
We feel this description closely aligns with how F5 approaches AI security at runtime. Through F5 AI Guardrails, F5 focuses on inspecting and enforcing policy during live AI interactions—across models, applications, and agent workflows—helping organizations apply governance controls where AI systems actually operate.*
Forrester: Defining AI red teaming and guardrails in a continuous assurance model
Various Forrester research examines AI security from an operational perspective, focusing on systems that behave autonomously and evolve over time. Rather than emphasizing point-in-time assessments, it is our take that Forrester’s work explores how organizations maintain oversight and assurance as AI systems operate in dynamic environments.
In its research Use AI Red Teaming To Evaluate The Security Posture Of AI-Enabled Applications,[i] Forrester defines AI red team assessments as follows:
"An AI red team assessment blends traditional cybersecurity red team concepts alongside safety, toxicity, and harm specific to artificial intelligence. AI red team engagements should consist of a combination of human participants performing cybersecurity tests of the tech stack supporting AI — including infrastructure, applications, APIs, modifications to AI such as retrieval-augmented generation (RAG), and an underlying large language model — along with automated testing using AI to assess prompt security and safety."
This definition reflects the type of AI-specific adversarial testing delivered by F5 AI Red Team. In framing red team engagements this way, we find that this research emphasizes the need for effective testing to account for how AI is embedded and operated within real systems, not evaluated in isolation.
In parallel, Introducing Forrester’s AEGIS Framework: Agentic AI Enterprise Guardrails For Information Security,[ii] states that “establishing enterprise guardrails is critical for secure deployment.“ F5 supports this, enabling organizations to leverage AI Guardrails as mechanisms applied during system operation to support oversight and policy enforcement, particularly where AI systems interact directly with users, sensitive data, and connected applications.
Across these research areas, our takeaway is clear: Forrester presents guardrails and red teaming as distinct but related practices that may be used within a continuous assurance approach to AI security, underscoring ongoing oversight rather than one-time validation.**
KuppingerCole: Recognizing generative AI defense as a distinct market category
In its Generative AI Defense Leadership Compass, KuppingerCole examines generative AI defense as an emerging discipline with requirements that extend beyond traditional application and data security controls. It does so, by evaluating how organizations are addressing risks introduced by large language models and AI-driven applications in production environments.
A core recommendation of the report is the need for security controls that operate where AI systems are actually used in order to enable real-time protection, policy enforcement, and auditability across diverse deployment models. Rather than focusing solely on model development or provider-level safeguards, this analysis reflects growing attention on securing AI interactions once systems are deployed and connected to users, applications, and sensitive data.
Given this Leadership Compass serves as a market guide to help organizations find the best solutions, upon evaluating our AI Guardrails and AI Red Team products, KuppingerCole positioned F5 as a leader across its Product, Innovation, and Market categories. As is defined in the report, this positioning showcases F5’s “cutting-edge products, not only in response to customers’ requests but also because they are driving technical changes in the market by anticipating what will be needed in the months and years ahead.”***
Translating analyst perspectives into practice
Although Gartner, Forrester, and KuppingerCole approach AI security from different analytical approaches, their research surfaces practical challenges enterprises are actively working to solve, including:
- Enforcing policies consistently during AI interactions
- Understanding real-world AI exposure before incidents occur
- Maintaining oversight as AI systems evolve and scale
This is where F5 AI Guardrails and F5 AI Red Team fit into the broader landscape. AI Guardrails provide runtime enforcement to help manage AI behavior in production, while AI Red Team enables adversarial testing designed to reflect real-world usage and attack patterns.
(Note: This alignment should not be interpreted as analyst endorsement. Rather, it reflects how F5’s AI runtime security capabilities map to control types and operating models described across analyst research.)
Securing and optimizing AI investments
As AI systems become more autonomous and business-critical, the risks associated with inadequate security extend beyond breaches to include compliance exposure, reputational harm, and stalled innovation. Analyst frameworks—from Gartner TRiSM to Forrester’s work on AEGIS and KuppingerCole’s generative AI defense research—illustrate why runtime guardrails and AI red teaming are increasingly central to enterprise AI security strategies.
Put simply, F5’s AI runtime security solutions help organizations apply these insights in practice, supporting secure AI adoption while preserving the flexibility needed to innovate.
To learn more, check out our F5 AI Guardrails and F5 AI Red Team webpages.
Analyst disclaimers
*Source: Gartner Report, Market Guide for AI Trust, Risk and Security Management, By Avivah Litan, Max Goss, etc., February 2025. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
**Forrester Research does not endorse any company, product, or service. This discussion reflects an interpretation of publicly available Forrester research and is provided for informational purposes only.
***KuppingerCole does not endorse any vendor, product, or service. This discussion reflects licensed KuppingerCole research and is provided for informational purposes only.
[i] Use AI Red Teaming To Evaluate The Security Posture of AI-Enabled Applications, Forrester Research Inc., Sept. 29, 2025
[ii] Introducing Forrester’s AEGIS Framework: Agentic AI Enterprise Guardrails for Information Security, Forrester Research, Inc. Aug. 3, 2025
About the Author

Related Blog Posts

AI security through the analyst lens: insights from Gartner®, Forrester, and KuppingerCole
Enterprises are discovering that securing AI requires purpose-built solutions.

Securing the public sector against Shadow AI with F5 BIG-IP SSL Orchestrator
Learn how state, local, and education organizations can enhance visibility and security in encrypted network traffic while addressing compliance and governance.

F5 secures today’s modern and AI applications
The F5 Application Delivery and Security Platform (ADSP) combines security with flexibility to deliver and protect any app and API and now any AI model or agent anywhere. F5 ADSP provides robust WAAP protection to defend against application-level threats, while F5 AI Guardrails secures AI interactions by enforcing controls against model and agent specific risks.

Govern your AI present and anticipate your AI future
Learn from our field CISO, Chuck Herrin, how to prepare for the new challenge of securing AI models and agents.

New 7.0 release of F5 Distributed Cloud Services accelerates F5 ADSP adoption
Our recent 7.0 release is both a major step and strategic milestone in our journey to deliver the connectivity, security, and observability fabric that our customers need.

F5 provides enhanced protections against React vulnerabilities
Developers and organizations using React in their applications should immediately evaluate their systems as exploitation of this vulnerability could lead to compromise of affected systems.
