Application delivery isn’t a nice-to-have in the era of AI and automation. It’s absolutely essential as load balancers, API gateways, and service meshes become the last line of defense when autonomous systems go off script.
Earlier this week, I lost a controller.
More specifically, I lost the Apex controller for my secondary system. It’s the automation brain responsible for handling part of my reef tank’s infrastructure, including two pumps that deliver fresh saltwater to the main system during water changes.
These pumps were configured with fallback values set to ON, a default safety measure in the event of system silence. So when the controller failed, they turned on and kept running. Unchecked. Uncoordinated. Unobserved.
The result? A near-overflow of the main tank and a very panicked Lori trying to contain it. And a reminder that automation without boundaries isn’t smart. It’s risky.
And that reminder has everything to do with how we’re building modern IT architectures, especially as AI begins to take the wheel.
What happened in the tank wasn’t due to a bug or hardware issue. The pumps did exactly what they were told. The problem was that when the controller failed, they kept going. There was no circuit breaker. No external watchdog. No smart fallback strategy.
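For the curious, here’s roughly what that missing watchdog could have looked like. This is a hypothetical sketch, not Apex code: the pump interface and the 30-second timeout are assumptions. The only point it makes is that silence from the controller should stop the water, not keep it flowing.

```python
import time


class Pump:
    """Stub pump so the sketch is self-contained; a real pump is hardware I/O."""

    def __init__(self):
        self._running = False

    def start(self):
        self._running = True

    def stop(self):
        self._running = False

    def is_running(self):
        return self._running


class PumpWatchdog:
    """Only lets the pump keep running while the controller's heartbeat is fresh."""

    HEARTBEAT_TIMEOUT_S = 30  # assumed tolerance for controller silence

    def __init__(self, pump):
        self.pump = pump
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called by the controller each time it checks in.
        self.last_heartbeat = time.monotonic()

    def tick(self):
        # Called on a timer; if the controller has gone quiet, fail OFF.
        silent_for = time.monotonic() - self.last_heartbeat
        if silent_for > self.HEARTBEAT_TIMEOUT_S and self.pump.is_running():
            self.pump.stop()  # silence is not approval
```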
This is exactly what can happen in an AI-driven IT system.
Imagine AI-generated policy updates being applied to live infrastructure. Or real-time traffic steering decisions based on a model that’s quietly failing. Or a deployment pipeline that keeps pushing changes because the controller never explicitly said “stop.”
In those moments, silence doesn’t mean approval. It means danger.
In most enterprise architectures, we rely on load balancers, API gateways, and service meshes to act as our version of circuit breakers. They monitor for health, detect latency, apply retry logic, and cut off unhealthy systems before they bring down everything else.
That’s not just operational hygiene; it’s how we contain failure and preserve trust in autonomous systems.
When the control plane goes dark, the load balancer’s job is to stop traffic, not assume everything’s fine.
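To make the pattern concrete, here’s a minimal circuit-breaker sketch. None of this is any particular product’s API; the class name, thresholds, and cool-down are illustrative assumptions.

```python
import time


class CircuitBreaker:
    """Illustrative circuit breaker: after enough consecutive failures,
    stop sending traffic to a backend until a cool-down period passes."""

    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # None means closed: traffic flows normally

    def allow_request(self):
        if self.opened_at is None:
            return True
        # After the cool-down, let a single probe request through (half-open).
        return time.monotonic() - self.opened_at >= self.cooldown_s

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the circuit again

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()  # open: cut the backend off
```

A load balancer or gateway does essentially this per backend, wrapping each proxied request in allow_request / record_success / record_failure, so one quietly failing service can’t drag down everything behind it.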
Of course, not all systems should shut down when something fails. In my reef tank, the display tank’s circulation pumps are also automated, but those are deliberately configured to fail ON.
Because without flow, oxygen levels drop and fish die.
This is the nuance that matters. Some systems must keep running no matter what, such as DNS, identity, and base telemetry. But process automation, like policy generation or staged deployments? Those need to fail OFF if the controller disappears.
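One hedged way to express that nuance is a per-workload fail-mode policy consulted when heartbeats stop. The workload names and the fail-closed default below are assumptions for illustration, not a prescription.

```python
from enum import Enum


class FailMode(Enum):
    FAIL_ON = "keep running if the controller goes silent"
    FAIL_OFF = "stop if the controller goes silent"


# Illustrative policy: life-support services fail ON, process automation fails OFF.
FAIL_POLICY = {
    "dns": FailMode.FAIL_ON,
    "identity": FailMode.FAIL_ON,
    "base_telemetry": FailMode.FAIL_ON,
    "policy_generation": FailMode.FAIL_OFF,
    "staged_deployments": FailMode.FAIL_OFF,
}


def on_controller_silence(workload, stop_fn, keep_running_fn):
    """Apply the declared fail mode once heartbeats stop coming."""
    mode = FAIL_POLICY.get(workload, FailMode.FAIL_OFF)  # unknown workloads fail OFF
    if mode is FailMode.FAIL_OFF:
        stop_fn()
    else:
        keep_running_fn()
```

The design choice worth arguing over is the default: anything not explicitly marked as must-keep-running fails OFF.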
And that’s where AI agents introduce real tension: they don’t always know when they’re wrong. That’s why the architecture around them must.
Results from the 2025 F5 State of Application Strategy Report confirm what most of us are already seeing in the field: AI is moving from suggestion to action.
That tells us something important: we’re entering the age of autonomous digital agents, and organizations are not backing away. They’re accelerating.
But AI that can write or modify configs without human review? That’s like leaving the pumps running with no oversight. It’s only a matter of time before something floods or breaks.
AI is not optional. Neither are circuit breakers.
As we invite more intelligent agents into our systems—into traffic flows, configuration engines, and policy decisions—we need to equip the infrastructure beneath them with clear, automatic limits.
Load balancers. Gateways. Health checks. Retry logic. These aren’t just patterns; they’re guardrails for safe autonomy. That’s one of the driving reasons behind the development of the Application Delivery Top 10: to put a spotlight on how critical application delivery is to operating and securing a digital estate. Because it’s only going to become more important as automation replaces manual operations as the controller of digital business.
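Even retry logic only counts as a guardrail when it’s bounded. A small hypothetical helper, with the attempt cap and backoff chosen arbitrarily:

```python
import time


def retry_with_budget(call, max_attempts=3, backoff_s=0.5):
    """Retry a failing call a bounded number of times, backing off between tries.
    An unbounded retry loop is just the fallback-ON pump in software."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # give up loudly instead of hammering a sick backend
            time.sleep(backoff_s * attempt)
```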
Automation without discipline isn’t resilience.
It’s just chaos with a cron job.