Classification and the evolution of application delivery

F5 Ecosystem | August 18, 2025

If you’ve followed anything I’ve written in the past, oh, nearly 20 years, you already know that application architecture has profound impacts on application delivery. It’s not just about how apps are built; it’s about how they behave, how they scale, how they fail, and most importantly, how they move through the infrastructure we’ve built to deliver them. And agent architectures are definitely shaking that all up.

The evolution this time is going to require more than new protocol support or new security capabilities. It’s going to require classification.

Now the industry has been classifying traffic for years, but mostly to stop it. We’ve all used classification to block bots, filter out threats, and slap WAF policies on malicious payloads. That’s table stakes. And it’s where classification has largely been relegated: security.

But AI is changing traffic, and the resources needed to process it. And agent architectures? They don’t just change the rules of the game; they change the game itself.

AI adoption is moving faster than most technology in the past, and agents are already entering enterprises.

Every POST might be a request, or a goal, or an AI agent trying to reason its way through your stack. It might kick off an orchestration, trigger a recursive workflow, or blow up your backend with a 200-token prompt that expands into 20,000. And the infrastructure? It still thinks all POSTs are the same.

That’s the problem.

Modern traffic isn’t just traffic; it’s a task, a job, a decision. And your ability to classify it before you route it determines whether your app stays responsive, available, and, ultimately, secure.

1. Connection-based routing (legacy)

Let’s begin with the familiar. The legacy model of traffic management, rooted in Layer 4 (L4) routing, still underpins much of the Internet. This approach relies on basic parameters:

  • L4 routing: Decisions based on transport layer details, such as IP addresses and port numbers.
  • Path-based routing: Limited inspection of URL paths, typically at the application layer (Layer 7).
  • Homogeneous pools: Backend servers treated as identical, with no differentiation based on specialized capabilities.
  • Load balancing: Distributing connections evenly across servers to prevent overload.

The problem? This model is largely blind to the meaning of a request. It sees traffic as uniform packets, not as varied tasks with different needs. In today’s world of generative AI, embedded large language models (LLMs), and agent-based frameworks, this blindness is a growing risk because not all traffic is equal anymore.

Consider two HTTP POST requests:

  • One might be a simple CRUD (Create, Read, Update, Delete) operation, like updating a user profile.
  • Another could trigger a complex multi-agent workflow, such as an AI-driven reasoning loop that makes five downstream tool calls, writes to a database, and initiates a long-running text summarization task.

If your infrastructure treats these requests identically and uses the same routing rules, timeouts, or backend servers, then you’re courting trouble. Bottlenecks, failures, or degraded performance can result when a resource-intensive AI task is handled like a lightweight database query. Modern traffic demands smarter management.
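The blindness is easy to see in code. Here's a minimal sketch of the legacy model (pool addresses and request details are hypothetical, for illustration only): the routing decision uses nothing but transport-layer details, so the CRUD call and the multi-agent workflow land wherever round-robin says they land.

```python
# Legacy L4-style routing: decisions use only transport-layer details.
# Backend addresses are hypothetical, for illustration only.
from itertools import cycle

BACKEND_POOL = cycle(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])

def route_l4(src_ip: str, dst_port: int) -> str:
    """Pick a backend using only L4 details -- the payload is never inspected."""
    # src_ip and dst_port are all we have; every connection to the
    # app port gets the same round-robin treatment.
    return next(BACKEND_POOL)

# Both a lightweight profile update and a heavy multi-agent workflow
# arrive as POSTs on port 443 -- and are treated identically.
crud_backend = route_l4("203.0.113.7", 443)
agent_backend = route_l4("203.0.113.9", 443)
```

Nothing about the second request's cost or fan-out ever enters the decision, which is exactly the problem.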

2. Classification-based routing (now/transitional)

We’ve been doing L7 routing for years. Matching on paths, inspecting payloads, peeking into headers to decide which pool to send traffic to. That’s content-based routing. It’s useful. But it only tells you what the request contains, not why it exists.
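Content-based routing in its simplest form looks something like this sketch (paths and pool names are hypothetical): match a prefix, pick a pool. It tells you the request hit `/api/`, but not whether that request is a login or a 20,000-token agent prompt.

```python
# Content-based L7 routing: match on what the request contains.
# Route table entries and pool names are hypothetical.
ROUTES = [
    ("/api/", "pool-api"),
    ("/static/", "pool-static"),
]

def route_by_content(path: str, default: str = "pool-default") -> str:
    """First-prefix match on the URL path; intent is never considered."""
    for prefix, pool in ROUTES:
        if path.startswith(prefix):
            return pool
    return default
```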

What we need going forward is context-based classification.

  • It operates at L7+, but goes beyond payload inspection.
  • Traffic is classified based on intent, complexity, and cost.
    • Is this a summarization task or a user login?
    • Is it recursive, AI-generated, or latency-sensitive?
  • Pools are differentiated by capability, not just capacity.
  • Routing decisions factor in purpose, not just format.

This is how we shift from treating traffic as “content to be served” to “work to be processed.” It enables real-time decisions about where a request should go, how it should be prioritized, and what policies should apply based on what the request is trying to achieve. Call it intent-based. Call it context-aware. Doesn’t really matter what you call it because the shift is real: requests aren’t just transactions anymore. They’re becoming invocations. Triggers. Goals wrapped in HTTP.
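To make the shift concrete, here's a minimal sketch of context-based classification. Everything in it is an assumption for illustration: the pool names, the toy heuristics standing in for a real classifier, and the cost threshold. The point is the shape, classify first, then route on capability.

```python
# Context-based classification: route on intent and estimated cost,
# not just path or payload. Pools, heuristics, and thresholds are
# hypothetical, standing in for a real classifier.
from dataclasses import dataclass

POOLS = {
    "gpu-inference": "pool-gpu",   # capability: heavy AI workloads
    "general": "pool-general",     # capability: CRUD / lightweight tasks
    "low-latency": "pool-edge",    # capability: latency-sensitive calls
}

@dataclass
class Classification:
    intent: str           # e.g. "summarization", "login", "crud"
    estimated_cost: int   # rough token/compute estimate
    latency_sensitive: bool

def classify(path: str, body: dict) -> Classification:
    """Toy heuristics in place of a real classification model."""
    if "prompt" in body:
        return Classification("summarization", len(body["prompt"]) * 10, False)
    if path == "/login":
        return Classification("login", 1, True)
    return Classification("crud", 5, False)

def route(c: Classification) -> str:
    """Capability-aware routing: purpose drives the pool choice."""
    if c.intent == "summarization" or c.estimated_cost > 1000:
        return POOLS["gpu-inference"]
    if c.latency_sensitive:
        return POOLS["low-latency"]
    return POOLS["general"]
```

Two POSTs that look identical at L4 now diverge: `route(classify("/api/summarize", {"prompt": "x" * 200}))` lands on the GPU pool, while `route(classify("/login", {}))` goes to the low-latency pool.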

This all maps directly to Application Delivery Top 10 #4 (Traffic Controls), #5 (Traffic Steering), and #6 (Latency Management). Because you can’t control, steer, or optimize what you can’t contextualize.

3. Task-based routing (emerging)

This is where we’re going, especially as agentic systems and recursive planners become part of the production landscape. We’re not just serving content anymore. We’re managing goals. And that changes everything.

In this model, a single request doesn’t always map to a single response. Instead, it may kick off a sequence of smaller tasks. Some of those tasks might run in parallel. Others might depend on something else finishing first. You’re not just routing traffic; you’re orchestrating work.

Think of it more like a workflow than a pipeline. You’re managing a checklist of steps that all contribute to an outcome. Some steps can happen right away; others need to wait their turn. But everything needs to be coordinated, observed, and steered based on capability, policy, and current load.
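That checklist-of-steps view is really a dependency graph. Here's a sketch using Python's standard-library `graphlib`; the task names are hypothetical, and a real system would dispatch each wave to workers instead of just computing the order. Steps with no unmet dependencies form a wave and can run in parallel; everything else waits its turn.

```python
# Task-based view: a single request unfolds into a dependency graph
# of steps. Task names are hypothetical; a real orchestrator would
# dispatch each wave to workers based on capability and load.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
WORKFLOW = {
    "fetch_context": set(),
    "call_tool_a": {"fetch_context"},
    "call_tool_b": {"fetch_context"},   # can run in parallel with tool A
    "summarize": {"call_tool_a", "call_tool_b"},
    "write_db": {"summarize"},
}

def execution_waves(graph: dict) -> list[list[str]]:
    """Group tasks into waves: everything in one wave can run in parallel."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # tasks whose dependencies are done
        waves.append(ready)
        ts.done(*ready)
    return waves
```

For the graph above, the two tool calls share a wave, which is exactly the coordination problem a connection-oriented router never sees.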

At that point, traditional routing logic starts to fall apart. Static paths and uniform policies won’t cut it when every request could unfold into a multi-step, multi-agent process. You need a system that understands what the task is, what it needs, and who or what is best suited to handle it, not just where to send the packet.

And here’s the kicker: you can’t do any of that without classification. If your infrastructure doesn’t understand what the request is trying to accomplish, it can’t route it intelligently. This isn’t the distant future; it’s the emerging reality. And classification is the bridge that gets us from request routing to real coordination.

Why this matters

Let’s be clear: this isn’t about adding more metadata or building a fancier routing table. It’s about shifting how we think about the traffic we’re already seeing.

Classification is how we move from reacting to requests to understanding them before we route, scale, or break something by treating a recursive agent loop the same way we treat a static asset fetch.

It’s not just for security anymore. It’s not just about what’s in the request. It’s about what the request means, what it wants to do, how heavy it is, and what it touches downstream.

And that’s why classification isn’t some emerging trend. It’s a requirement. A necessary step in evolving traffic management to handle the world we’re building: one where agents, tasks, workflows, and real-time orchestration aren’t edge cases; they’re widely deployed.

You can’t manage what you don’t understand. Classification is how you start understanding.

About the Author

Lori Mac Vittie
Distinguished Engineer and Chief Evangelist
