When Complexity Becomes the Enemy: How Operational Workflow Sprawl Threatens Performance and Delivery

F5 Ecosystem | July 30, 2025

In the evolving world of application delivery, organizations are investing heavily in modern infrastructure, applications, and intelligent automation. Yet despite this investment, many still struggle with issues that should have been solved by now: systems that fail under pressure, deployment policies that don’t hold across environments, and workflows that grind to a halt when speed is most needed.

Two of the top ten delivery challenges organizations face today are particularly telling: lack of fault tolerance and resilience, and incompatible delivery policies. On the surface, these may sound like infrastructure or tooling problems, but dig deeper and it becomes clear they are symptoms of a more fundamental structural flaw: operational workflow complexity.

This complexity isn’t just an annoyance. It actively degrades runtime performance, undermines policy consistency, and prevents organizations from realizing the full potential of their digital transformation investments.

Complexity is the top operational obstacle

In our latest research, 54.8% of respondents reported that their biggest challenge in designing operational workflows for application delivery and security was "too many tasks in the process." That’s more than half of IT decision-makers and implementers directly acknowledging that their systems are too complex to operate efficiently.

And that’s not the only signal. Nearly as many, 53.6%, said "too many different APIs" are involved in their workflows. Another 45.3% cited "too many different languages needed" as a top challenge. This is fragmentation at the operational layer: different tools, different syntaxes, different owners. All of it adds overhead, and all of it creates risk.

These numbers should concern anyone who cares about uptime, user experience, or operational velocity.

How complexity breaks resilience

Let’s start with ADC02, Lack of Fault Tolerance and Resilience. In a typical modern application stack, you might have half a dozen different layers responsible for routing, load balancing, service discovery, authentication, telemetry, and policy enforcement. Each of these layers could be owned by a different team. Each might have its own API, its own change control process, and its own language.

What happens when something goes wrong?

A failover doesn’t trigger because the upstream dependency didn’t recognize the node failure. A new deployment misroutes traffic because the service mesh was out of sync with the load balancer. A latency spike takes minutes to isolate because the observability tools weren’t configured consistently across tiers.

These are not rare edge cases. They are day-to-day operational failures caused by workflow sprawl. And they tie directly back to the data, which shows that 29% of respondents still rely on custom scripts to support automation. That’s a huge red flag. Scripting around complexity is not automation; it’s fragile glue. It breaks when the environment changes, and it delays recovery when performance degrades.
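To make that concrete, here is a minimal sketch of what that fragile glue often looks like in practice. Everything in it is hypothetical: the hostnames, health-check path, and load balancer endpoint are illustrative, not any particular product’s API. The point is the hard-coded assumptions: change any of them, and the “automation” quietly stops working.

```python
# A hypothetical remediation script of the kind the data describes: hard-coded
# hosts, one specific load balancer endpoint, no awareness of the rest of the stack.
import urllib.request
import urllib.error

# Hard-coded assumptions: these hostnames, this port, this health path.
NODES = ["app-01.internal:8080", "app-02.internal:8080"]
LB_API = "http://lb.internal/api/pool/remove"  # illustrative load balancer endpoint

def check_and_remediate() -> None:
    for node in NODES:
        try:
            urllib.request.urlopen(f"http://{node}/healthz", timeout=2)
        except (urllib.error.URLError, OSError):
            # "Failover": tell one load balancer to drop the node. Nothing updates
            # service discovery, the mesh, or observability, so the rest of the
            # stack still believes the node is healthy. And if the LB API itself
            # has moved or changed shape, this call simply fails -- the fragility.
            req = urllib.request.Request(LB_API, data=node.encode(), method="POST")
            urllib.request.urlopen(req, timeout=2)

if __name__ == "__main__":
    check_and_remediate()
```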

When operational workflows are bloated with handoffs, tribal knowledge, and manual remediation, there is no true fault tolerance. Just hope.

When policies don’t travel with the app

When most people hear “policy drift,” they think security. But drift in application delivery policies—things like traffic routing rules, load balancing behavior, or rate-limiting configurations—can be just as damaging. That drift is one of the causes of ADC07, Incompatible Delivery Policies.

In a well-orchestrated pipeline, policies should follow the application from dev to staging to production. But in practice, the handoffs between environments are often manual, inconsistent, and tool-dependent. A load balancing rule defined in staging may not match what’s configured in the public cloud. A routing policy tested in development might be omitted from production due to environment-specific constraints or oversight. Canary release thresholds, caching behaviors, failover logic—all of these are delivery-layer policies that frequently drift during deployment.
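A lightweight way to surface that kind of drift is simply to compare the same policy, as each environment actually defines it, side by side. The sketch below assumes a simplified representation of delivery policies as plain key-value settings; the policy names and values are illustrative, not tied to any specific tool.

```python
# A minimal drift check: gather the same delivery-policy settings from each
# environment and report any key whose value (or presence) differs.
from typing import Any

policies: dict[str, dict[str, Any]] = {
    "dev": {
        "lb_algorithm": "round_robin",
        "rate_limit_rps": 100,
        "canary_weight": 10,
    },
    "staging": {
        "lb_algorithm": "least_connections",
        "rate_limit_rps": 100,
        "canary_weight": 10,
    },
    "production": {
        "lb_algorithm": "least_connections",
        "rate_limit_rps": 500,
        # canary_weight was never carried over -- the kind of omission that
        # only shows up as odd traffic behavior after rollout.
    },
}

def find_drift(envs: dict[str, dict[str, Any]]) -> dict[str, set[str]]:
    """Return, per policy key, the set of distinct values seen across environments."""
    keys = {k for env in envs.values() for k in env}
    drift: dict[str, set[str]] = {}
    for key in sorted(keys):
        values = {repr(env.get(key, "<missing>")) for env in envs.values()}
        if len(values) > 1:
            drift[key] = values
    return drift

if __name__ == "__main__":
    for key, values in find_drift(policies).items():
        print(f"drift in {key}: {sorted(values)}")
```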

The root cause is the same one echoed in the F5 2025 State of Application Strategy Report: complexity. With 45.3% of respondents citing “too many different languages needed” and 53.6% citing “too many different APIs,” it’s clear that delivery workflows are fragmented across tooling ecosystems. Each environment might use different configuration models or infrastructure-as-code platforms, requiring translation every step of the way.

Translation introduces risk. It also introduces delay. When delivery policies must be manually rewritten or adapted between systems, rollout consistency suffers. And in distributed systems, inconsistency is often worse than failure—because it creates unpredictable behavior.

Imagine a multi-region deployment where a traffic-steering policy only applies in one zone. Or a global application with inconsistent cache rules across edge nodes. The result is a degraded user experience that’s hard to trace and harder to fix.

To move fast without breaking things, organizations need delivery policies that are declarative, portable, and enforced consistently across every tier of the stack. That’s impossible when workflows rely on a patchwork of manual processes and team-specific logic.
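What that looks like in practice is a single policy definition that serves as the source of truth and is rendered for each environment, rather than being re-authored in each one. The sketch below is deliberately tool-agnostic; the policy fields and the JSON output are assumptions for illustration, not any platform’s schema.

```python
# Declarative, portable delivery policy: define it once as data, render it per
# environment, and every tier receives the same rules.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DeliveryPolicy:
    lb_algorithm: str
    rate_limit_rps: int
    canary_weight: int
    failover_region: str

# One source of truth; environments only vary where they genuinely must.
BASE = DeliveryPolicy(
    lb_algorithm="least_connections",
    rate_limit_rps=500,
    canary_weight=10,
    failover_region="eu-west",
)

def render(policy: DeliveryPolicy, environment: str) -> str:
    """Emit the same policy as a JSON document tagged for one environment."""
    doc = {"environment": environment, **asdict(policy)}
    return json.dumps(doc, indent=2)

if __name__ == "__main__":
    for env in ("dev", "staging", "production"):
        print(render(BASE, env))
```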

Until delivery policies are treated as first-class citizens—that means on equal footing with app code and infrastructure config—organizations will continue to struggle with drift, downtime, and delivery delays. Simplifying those workflows is the first and most necessary step.

Streamlining isn’t optional anymore

Reducing workflow complexity isn’t about cleaning up the architecture diagram or making ops teams happier (though it would do both). It’s about delivering on the core promises of modern application infrastructure: speed, resilience, and consistency.

Organizations that want to improve runtime performance and enforce predictable delivery behavior must take a hard look at how many tools, teams, and handoffs are involved in their pipelines. And more importantly, they must ask: how many of these steps are adding value, and how many are just there to work around limitations in the system?

The answer isn’t always a new tool. Sometimes it’s fewer tools. Sometimes it’s a single, unified platform that enforces delivery logic with the same syntax and behavior across every environment. Sometimes it’s automating not just deployment, but governance, so delivery policies apply as code, not as a checklist.
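Governance as code can be as simple as a pipeline gate that validates a rendered policy against baseline rules before anything ships. The check below is a hypothetical example; the rules and thresholds are placeholders, and the exit code is what lets a CI system fail the deployment automatically instead of relying on a checklist.

```python
# A minimal policy-as-code gate: validate a candidate delivery policy and fail
# the pipeline (non-zero exit) if it violates baseline governance rules.
import sys

def validate(policy: dict) -> list[str]:
    """Return a list of violations; an empty list means the policy may ship."""
    violations = []
    if policy.get("rate_limit_rps", 0) <= 0:
        violations.append("rate limit must be set and positive")
    if policy.get("lb_algorithm") not in {"least_connections", "round_robin"}:
        violations.append("unsupported load balancing algorithm")
    if not policy.get("failover_region"):
        violations.append("a failover region is required")
    return violations

if __name__ == "__main__":
    candidate = {"rate_limit_rps": 500, "lb_algorithm": "least_connections"}
    problems = validate(candidate)
    for p in problems:
        print(f"policy violation: {p}")
    sys.exit(1 if problems else 0)
```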

The path to resilient and reliable delivery runs through simplification. Until we treat complexity as a critical risk, it will continue to erode everything we’ve built on top of it.


About the Author

Lori Mac Vittie, Distinguished Engineer and Chief Evangelist
