The New AI Risk Surface: Invisible, Autonomous, and Already Inside

F5 ADSP | July 10, 2025
James White, VP, Engineering, F5

At the halfway point of 2025, it’s timely to take a snapshot of enterprise AI adoption and the state of AI security.

Enterprise GenAI Usage Is Now Standard

Across the board, enterprises have at least a base-level understanding of GenAI. They understand what it is and where it can be put to use. If your job doesn’t give you access to GenAI, you’re behind the curve right now.

Building applications on top of GenAI is quickly becoming standard within enterprises. First-hand, we see our customers deploying AI apps to production, both internally and externally.

When it comes to agent adoption, there are two strands: adoption through third parties and building proprietary agents. Agent adoption through third parties is quite well advanced, but companies don’t always know they’ve done it. Organizations that started with a GenAI model interface in a browser may be unaware that they’re also using agentic AI.

For example, when Google Gemini launched, it didn’t have any agentic elements built in; now Deep Research is an agent within Gemini. When Deep Research is given a prompt, it figures out what tasks to do, which web searches to do, brings back information and creates reports. Likewise, there are agents within Microsoft 365 Copilot.
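The research-agent pattern described above can be sketched in a few lines. This is a hedged illustration of the plan-search-synthesize loop, not Gemini’s actual implementation; the helper functions are hypothetical stand-ins.

```python
# Toy sketch of a Deep-Research-style agent loop: decompose the prompt
# into tasks, run each task, then synthesize a report. All helpers are
# illustrative stand-ins for model- and tool-driven steps.

def plan_tasks(prompt: str) -> list[str]:
    # In a real agent, a model decomposes the prompt; here a fixed plan.
    return [f"search: {prompt} overview", f"search: {prompt} recent news"]

def run_search(task: str) -> str:
    # Stand-in for an actual web search tool call.
    return f"results for '{task}'"

def write_report(prompt: str, findings: list[str]) -> str:
    # Stand-in for the synthesis step a model would perform.
    body = "\n".join(f"- {f}" for f in findings)
    return f"Report on {prompt}:\n{body}"

prompt = "enterprise AI adoption"
findings = [run_search(t) for t in plan_tasks(prompt)]
print(write_report(prompt, findings))
```

The key property, for security purposes, is that the agent chooses its own tasks at runtime; the plan is not fixed in advance.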

Building proprietary agents is mainly at the prototyping and experimentation stage, but agents are moving much faster than GenAI did. In 2024, the questions were: Is this going to happen? Is there really a difference in risk? Now those questions are gone. Enterprises have no doubts.

That’s why understanding agents and agentic risk is so important.

MCP Is Not Shorthand for Security

Many enterprises have begun to think of the Model Context Protocol (MCP) as the way to introduce agentic AI. Organizations are almost replacing the word ‘agent’ with ‘MCP’; they talk about ‘building an MCP’.

That’s not strictly accurate. MCP effectively represents the menu of options that an agent can use to do things. Think of it as the MCP server saying ‘here’s what’s on the menu’, and the MCP client decides ‘I’m going to use that one in this way’. It then uses an AI model to figure out what to do and how.
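The menu analogy can be made concrete with a toy sketch. This is not the real MCP SDK; the tool names and dispatch logic are hypothetical, and the point is only the division of labor: the server advertises tools, while the client (guided by a model) decides what to call.

```python
# Toy sketch of the MCP division of labor. The server only exposes a
# "menu" of tools; the client, steered by a model, picks one and calls it.

TOOL_MENU = {  # what a server would advertise (cf. MCP tools/list)
    "search_tickets": {"description": "Search support tickets", "args": ["query"]},
    "close_ticket":   {"description": "Close a ticket by id",   "args": ["ticket_id"]},
}

def server_call_tool(name: str, **kwargs) -> str:
    """Server side: executes a tool. Server software risk lives here."""
    if name not in TOOL_MENU:
        raise ValueError(f"unknown tool: {name}")
    return f"executed {name} with {kwargs}"

def client_decide(user_goal: str):
    """Client side: in reality a model makes this non-deterministic choice,
    which is why model risk remains even with MCP in place."""
    if "close" in user_goal:
        return ("close_ticket", {"ticket_id": "T-123"})
    return ("search_tickets", {"query": user_goal})

tool, args = client_decide("close my billing ticket")
print(server_call_tool(tool, **args))
```

Nothing in this exchange is inherently secure: the protocol standardizes how the menu is presented, not whether the choices made from it are safe.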

In that structure, risk still stems from the model, and other risks sit within the server software. Those risks aren’t automatically removed by MCP. The protocol is very good, but it is by no means secure by default, as recent security incidents have shown. For example, Asana recently identified and disclosed a bug that could have exposed data belonging to Asana MCP users to users in other accounts.

By assuming that MCP minimizes risk, enterprises may actually increase their risk exposure. A lot of our customers and prospects are now asking, ‘Tell us how you secure MCP’. They need to take a holistic view that considers and stress-tests the use case, the model, deployment patterns, and interactions with tools via MCP and other standards.

With Agentic AI, the ‘How’ Is Crucial

Most companies using AI over-index on what an AI model or application achieves. They are looking at the work product, the output. With agents working autonomously, the risk levels are much higher, and the how becomes incredibly important.

That’s because there are many ways to achieve a what, and a lot of those ways are inappropriate or incorrect. Imagine an agent is tasked with keeping a database up to date, and has access and permissions to delete or insert data. It could delete entries relating to F5, for example, by accurately finding and removing exact matches of the company name.

However, the agent could equally decide to delete ‘F*’ records, removing every company whose name begins with F. This crude action would achieve the same goal, but with a cascade of unintended consequences that may not be easily remediated. In the age of agents, when actions are driven by non-deterministic models, unintentional behavior is the breach – especially if safeguards are inadequate. Understanding agent behavior is absolutely critical to successful adoption.
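The gap between the two approaches is easy to demonstrate. The sketch below uses an in-memory SQLite database with a hypothetical companies table; the row counts show how the exact-match delete and the wildcard delete achieve the “same” goal with very different blast radii.

```python
# Exact-match vs wildcard delete: same stated goal, different blast radius.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT)")
conn.executemany("INSERT INTO companies VALUES (?)",
                 [("F5",), ("Ford",), ("Fujitsu",), ("Acme",)])

# Precise action: delete only the exact match the task called for.
precise = conn.execute("DELETE FROM companies WHERE name = ?", ("F5",)).rowcount
print("exact match removed:", precise)   # 1 row

conn.execute("INSERT INTO companies VALUES ('F5')")  # restore for comparison

# Crude action: the wildcard an agent might equally choose.
crude = conn.execute("DELETE FROM companies WHERE name LIKE 'F%'").rowcount
print("wildcard removed:", crude)        # 3 rows: F5, Ford, Fujitsu
```

Both statements satisfy “remove F5 from the database”; only the audit of how the agent acted reveals that one of them also destroyed unrelated records.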

The Enterprise Security Stack Is Struggling

Securing AI demands a whole new way of operating, not just slight changes to existing security solutions. Over time, the traditional software that companies use will morph into other manifestations, and how to protect those is not yet completely clear.

For organizations with a traditional security stack, there is already evidence that it is creaking at the seams, because those tools were designed and built for a pre-AI era. In the AI era, the volume of net-new information being generated, ranging from garbage to high-quality, mission-critical data, is exploding.

The systems that spent the past several decades inspecting human-generated traffic are now coming to terms with traffic from humans using AI. That alone is a huge multiplier. Now imagine traffic from agents generating information: growth becomes exponential.

The traditional security stack is not built for that reality; it will have to change very dramatically to keep up—and sooner rather than later.

A Glimpse at the Future: Atomic Agents

Today, an agent requires an AI model to act as its brain. That brain usually sits somewhere central; the agent framework sends questions to the model and answers come back. That cycle runs back and forth between agent and model.

If an agent goes rogue in that scenario, an organization can apply a simple networking rule to block the agent’s access to the model. In the worst-case scenario, the model can be turned off.
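That containment lever can be sketched as a simple egress check. The agent IDs and model endpoint below are illustrative; the point is that a central model gives defenders a single chokepoint to cut a rogue agent off from its “brain”.

```python
# Minimal sketch of the containment lever above: every agent-to-model
# request passes an egress gateway, so blocking one agent is one rule.
# Agent names and the endpoint are hypothetical.

BLOCKED_AGENTS: set[str] = set()  # populated when an agent misbehaves
MODEL_ENDPOINT = "model.internal:8443"

def egress_allowed(agent_id: str, destination: str) -> bool:
    """Gateway check applied to every outbound agent request."""
    if destination == MODEL_ENDPOINT and agent_id in BLOCKED_AGENTS:
        return False  # the rogue agent loses access to its brain
    return True

BLOCKED_AGENTS.add("report-agent-7")
print(egress_allowed("report-agent-7", MODEL_ENDPOINT))   # blocked
print(egress_allowed("billing-agent-2", MODEL_ENDPOINT))  # still allowed
```

Once the model runs co-located with the agent, as described next, this chokepoint disappears, and with it the simplest kill switch.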

In the future, models will become small enough that they don’t require GPUs and won’t have to sit in server rooms or in the cloud. The ‘brain’ will run in the same place as the agent, so the agent can efficiently operate anywhere and scale up and down. When that happens, security risk explodes.

The big concept here is atomic agents, and there is evidence they are coming. Microsoft’s BitNet b1.58 2B4T, a 1-bit model that runs on CPU, shows what’s possible. It’s not small enough, not fast enough, and probably not good enough – yet. But ‘yet’ becomes reality very quickly in the AI era.


