Agent skills: An emerging open standard

Industry Trends | April 06, 2026

As AI agents mature from copilots into autonomous actors, the industry is converging on a shared abstraction: agent skills.

Agent skills formalize what an agent knows how to do. They package instructions, expected behaviors, and declared tool usage into portable artifacts that can be shared, versioned, and reused across agents and platforms. Conceptually, they serve the same role plugins once did for IDEs or extensions for browsers.

What gives this model real weight is that it is no longer experimental or proprietary. Anthropic has published an open Agent Skills standard, positioning skills as a portable, interoperable layer rather than a framework-specific construct. Much like MCP standardized how agents talk to tools, Agent Skills are standardizing how agents acquire capability.

That matters. Open standards have gravity. They attract ecosystems, tooling, registries, and eventually enterprise adoption. When vendors align on a common abstraction, it stops being a curiosity and starts being infrastructure. The December 2025 launch included partner skills from a range of vendors (Atlassian, Figma, Canva, Stripe, Notion, and Zapier), demonstrating that the standard already has momentum.

Which means the security implications are no longer hypothetical.

Why agent skills change the security model

Agent skills are intentionally lightweight. They are designed to be human-readable, easy to author, and simple to distribute. Most implementations rely on Markdown for behavior and YAML for metadata, because frictionless sharing is the goal.
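To make that concrete, a skill is typically a single Markdown file with a small YAML frontmatter block for metadata. The sketch below is invented for illustration (the skill name, description, and steps are not from any real skill), but it shows the general shape: structured metadata up top, natural-language behavior below.

```markdown
---
name: weekly-report
description: Summarize open issues and draft a weekly status report.
---

# Weekly report

When asked for a status update:

1. Query the issue tracker for issues updated in the last seven days.
2. Group results by component and severity.
3. Draft a summary following the team's report template.
```

Note that everything here is prose an agent interprets at runtime; nothing in the file is enforced by anything.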

From a developer productivity standpoint, that’s a win.

From a security standpoint, it creates a familiar but dangerous pattern: executable intent delivered as content.

Skills are loaded at runtime. They operate inside the agent’s reasoning loop. They can influence planning, tool selection, and execution order. They are often fetched dynamically and composed transitively. In other words, they behave less like configuration files and more like supply-chain inputs.

Crucially, skills do not enforce anything. They declare what an agent would like to do. They do not constrain what the agent can actually do.

That distinction is everything.

Once a skill is active, the traditional “shift left” controls developers rely on become largely irrelevant. There is no pull request for an agent’s internal plan. There is no human code review before a tool call fires. There is no pause between reasoning and execution.

Development, for agents, happens at runtime.

The enforcement gap skills don’t fill

It’s tempting to treat agent skills as a security surface. After all, they list required tools and expected behavior. It feels like a natural place to insert guardrails.

But skills live in the wrong place in the stack.

They sit inside the agent. They are interpreted by the same model that is optimizing for completion and progress. Asking a skill to enforce security is equivalent to asking the agent to restrain itself. That has never worked reliably, and it never will.

More importantly, skills have no authority. They cannot revoke access, throttle execution, inspect side effects, or prevent data exfiltration. At best, they provide hints. At worst, they provide a false sense of control.

Security cannot live at the level of intent description. It has to live at the level of capability execution. In more familiar terms, this is access control and identity management applied to autonomous systems. Our customers recognize this as a significant challenge related to agentic AI.

[Figure: Identity and access control tops the list of expected challenges for agentic AI in F5's State of Application Strategy survey.]

Why the tool layer is the only real enforceable boundary

Agents don’t cause harm by thinking incorrectly. They cause harm when thought turns into action at the tool boundary.

Every meaningful agent action, from calling an API to writing code, from deploying infrastructure to querying a database or sending data to another system, crosses a tool interface. That interface is where autonomy meets reality. It is also the one place an agent cannot bypass, provided the system is designed correctly.

This is why a “tool firewall” emerges as the best control plane for agent security.

A tool firewall mediates every tool invocation. It sits between agents and the systems they touch, enforcing policy before execution rather than auditing after damage is done. Unlike prompt-level controls or skill metadata, it operates outside the agent’s reasoning loop and is therefore not subject to model compliance.

This is the “MCP security” meets “AI guardrails” layer. MCP defines how agents talk to tools. The tool firewall decides whether they’re allowed to, and under what constraints. If tools are what allow agents to learn “new tricks,” then the tool firewall is the control that prevents those tricks from burning down the house.

This is not a new idea conceptually. It is the same pattern that underpins API gateways, service meshes, and zero trust architectures. What’s new is the actor on the other side of the interface.

In practice, a tool firewall functions as a broker. Agents never hold direct credentials to external systems. They request actions. The firewall decides whether those actions are allowed, under what conditions, and with what constraints.

A tool firewall evaluates identity context (which agent, which skill, which environment), inspects parameters and payloads, enforces policy-as-code, and issues short-lived, least-privilege credentials only when appropriate. It observes outcomes, records provenance, and provides a verifiable audit trail for autonomous behavior.

This turns skills into what they should be: requests for capability, not grants of authority.

What remains is for the market to decide what form the tool firewall will take. But its emergence as part of the security toolbox for agent architectures is inevitable.


About the Author

Lori Mac Vittie, Distinguished Engineer and Chief Evangelist | F5


