Innovation is one thing. Governance is another. A concerning mix of statistics from research surveys confirms that generative AI adoption by corporations is moving full steam ahead despite governance gaps that should give pause.
According to PwC’s AI Agent Survey from May 2025, 79% of companies have adopted AI, and only 2% are not considering agentic AI at all. Whether those agents display truly agentic characteristics (the ability to generate work plans, execute multi-step actions, evaluate results, and self-adjust) or are simply chatbots funneling one-off requests to a selected large language model (LLM), adoption is very high, and nearly all of those companies plan to increase AI spending in the coming year.
Compare that with our own research, which uses an AI Readiness Index to quantify operational capacity to successfully scale, secure, and sustain AI systems: only 2% of organizations are highly ready to tackle the challenges inherent in AI-enabled system design, deployment, and operations.
Still, EY research suggests IT and security teams have little choice: a majority of business leaders believe they must adopt agentic AI this year in order to get ahead of their competitors by this time next year. Organizations that slow down enough to thoughtfully close the security and governance gaps will find themselves further along, with a less vulnerable business, in 2026 than those who ignore them. The following three practices form a triumvirate that will help de-risk the plunge into generative and agentic AI:
What’s new needs to be secured, and every form of generative AI has one or more LLMs at its core. Trusting the model creators to continually improve accuracy, reduce hallucinations, and prevent jailbreaking is not sufficient. Businesses must invest in prompt and model services that can independently detect unwanted behaviors and stop them. Moreover, since every business using an LLM is actually using more than one, abstracting applications away from direct inference API calls is mandatory to meet those applications' availability, routing, scaling, and cost-control requirements.
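To make the gateway pattern concrete, here is a minimal Python sketch of an inference gateway that inspects prompts independently of any provider and routes requests across a pool of backends. Every name in it (the backends, the blocked patterns, the stubbed response) is illustrative, not a real product API:

```python
# Minimal sketch of an inference gateway that decouples applications from
# specific model providers. All names here are illustrative assumptions.
from dataclasses import dataclass
import re

@dataclass
class ModelBackend:
    name: str
    cost_per_1k_tokens: float
    healthy: bool = True

# Hypothetical pool of backends the gateway can route across.
BACKENDS = [
    ModelBackend("provider-a/model-x", 0.010),
    ModelBackend("provider-b/model-y", 0.004),
]

BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # naive jailbreak check
]

def inspect_prompt(prompt: str) -> None:
    """Independent prompt inspection: reject obviously unwanted inputs
    before they ever reach a model provider."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected by policy")

def route(prompt: str) -> ModelBackend:
    """Pick a backend on cost and health; the calling application never
    sees which provider actually served the request."""
    candidates = [b for b in BACKENDS if b.healthy]
    if not candidates:
        raise RuntimeError("No healthy model backends available")
    return min(candidates, key=lambda b: b.cost_per_1k_tokens)

def complete(prompt: str) -> str:
    inspect_prompt(prompt)
    backend = route(prompt)
    # A real gateway would call the provider's inference API here;
    # a stub keeps the sketch self-contained and runnable.
    return f"[{backend.name}] response to: {prompt!r}"

if __name__ == "__main__":
    print(complete("Summarize our Q3 incident report."))
```

Because the application only ever calls the gateway, backends can be swapped, scaled, or failed over without touching application code, which is the point of the abstraction.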
The enterprise data exposed to models may or may not be new to the business. Either way, it must be secured, and that means more than protecting data at rest or encrypting it in transit across the network. Any data that exists in the private enterprise environment, in any shape or form, even data destined for an approved third-party service or entity, must be detected and protected. Do not misunderstand: provenance is not the focus. Exit is.
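A simple way to picture exit-focused protection is a scan applied to everything bound for a third-party model before it crosses the enterprise boundary. The sketch below assumes a few example patterns and a block-on-match policy; none of it is a specific product's interface:

```python
# Illustrative sketch of egress-focused data detection: scan outbound
# payloads for sensitive content before they leave the enterprise.
# Patterns and the blocking policy are example assumptions.
import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_outbound(payload: str) -> list[str]:
    """Return the labels of any sensitive data found in an outbound payload."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(payload)]

def send_to_model(payload: str) -> str:
    findings = scan_outbound(payload)
    if findings:
        # Block, redact, or require approval; blocking is the simplest policy.
        raise PermissionError(f"Outbound payload blocked: {findings}")
    return f"forwarded to approved third-party model: {payload[:40]}..."

if __name__ == "__main__":
    print(send_to_model("Draft a welcome email for our new customer."))
    try:
        send_to_model("Customer SSN is 123-45-6789, please summarize.")
    except PermissionError as err:
        print(err)
```

Note that the check runs regardless of where the data came from or whether the destination is approved; the control sits at the point of exit, not the point of origin.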
Agents change the game because they use LLMs to decide for themselves what actions to take to reach an end goal. To operate, they need permission to act: to access resources and to create, change, and delete information. They must be monitored and controlled by something outside the agent, yet close enough to observe and evaluate it effectively. Two primary approaches have emerged that deserve careful watch as they mature in the coming months: guardrail frameworks (such as MCP-Universe) and LLM-as-a-Judge frameworks (such as the Microsoft LLM-as-a-Judge Framework).
The former defines ground truth using very specific, task-based operations, comparing the results of agent-initiated actions against separate actions pre-fetched by explicitly directed software. Its strength is an ever-growing library of sample code that checks facts, such as current weather or historical data, against pre-selected, known-good sources. It gathers that information, then compares the results, as ground truth, against what a deployed agent comes up with.
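A toy version of the pattern looks like this: deterministic code fetches the answer from a trusted source, and the agent's answer is scored against it. The lookup function and task names here are stand-ins for illustration, not MCP-Universe's actual API:

```python
# Toy sketch of the ground-truth pattern: explicitly directed code fetches
# an answer from a known-good source, and the agent's answer is scored
# against it. fetch_known_good() is an assumed stand-in, not a real API.
def fetch_known_good(task: str) -> str:
    """Deterministic lookup from a pre-selected, trusted source."""
    trusted = {"capital_of_france": "Paris"}
    return trusted[task]

def evaluate_agent(task: str, agent_answer: str) -> bool:
    """Task-based check: does the agent's result match ground truth?"""
    ground_truth = fetch_known_good(task)
    return agent_answer.strip().lower() == ground_truth.lower()

if __name__ == "__main__":
    # In practice the agent's answer would come from its own tool calls.
    print(evaluate_agent("capital_of_france", "Paris"))   # True
    print(evaluate_agent("capital_of_france", "Lyon"))    # False
```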
The latter uses a different LLM, or even multiple LLMs, to analyze the behavior of a deployed agent and evaluate the quality and propriety of its results, as defined by the business. Both approaches show promise, both are maturing rapidly, and both remain under the control of the business. Even better, both can be reinforced with human-in-the-loop controls as needed.
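For the judge approach, the essential shape is a second, independent model scoring the agent's output against a business-defined rubric, with low scores escalated to a human. The sketch below is generic: call_judge_model() is a hypothetical stand-in for whatever inference API the business uses, and the rubric and verdict format are assumptions, not the Microsoft framework's interface:

```python
# Minimal sketch of the LLM-as-a-Judge pattern. call_judge_model() is a
# hypothetical stand-in for a second, independent model; the rubric and
# JSON verdict format are illustrative assumptions.
import json

RUBRIC = """You are an impartial judge. Given a task and an agent's output,
return JSON: {"score": 1-5, "policy_violation": true/false, "reason": "..."}.
Score the output for accuracy and for compliance with company policy."""

def call_judge_model(system_prompt: str, user_prompt: str) -> str:
    # Stub so the sketch runs end to end; a real deployment would call a
    # separate model here, outside the agent's control.
    return json.dumps({"score": 2, "policy_violation": True,
                       "reason": "Output includes an unapproved refund."})

def judge(task: str, agent_output: str) -> dict:
    prompt = f"Task: {task}\nAgent output: {agent_output}"
    return json.loads(call_judge_model(RUBRIC, prompt))

if __name__ == "__main__":
    result = judge("Resolve the billing ticket.",
                   "Issued a full refund and closed the ticket.")
    if result["policy_violation"] or result["score"] < 3:
        # Human-in-the-loop reinforcement: flag low-scoring actions.
        print("Escalate to human review:", result["reason"])
```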
This triumvirate of controls covering models, data, and agents closes the cybersecurity and governance gaps that would otherwise expose a business to the new types of risk that come with generative AI and agentic systems.
To learn more about the use of these types of controls as independent infrastructure services, read F5’s recent announcement about CalypsoAI, the pioneer in defense, red-team, and governance solutions for AI apps and agents.