How Containers Change Scalability

F5 Ecosystem | June 22, 2020

The term 'cloud-scale' is often tossed around blithely. It's used in marketing a lot to imply REALLY BIG scale as opposed to, I suppose, traditional not-as-big-but-still-significant scale.

But just as there are kernels of truth hidden in myths and legends, there is some truth to the way in which the term cloud-scale is thrown around.

The truth is that cloud—and now containers—have changed the underlying foundation of scalability. This change is rooted in the fundamental differences between vertical and horizontal scaling strategies. For the first decade of the century we operated almost exclusively on the notion that vertical scale was the best way to achieve the speeds and feeds we needed. That meant more bandwidth, more compute, and more memory. More ports. Greater density. Faster processing.

But with the advent of the cloud era the focus flipped to horizontal scale. We still need more bandwidth and compute and processing power, but we've learned how to distribute that need. We still need more hardware; we just assemble it from multiple sources rather than from a single, monolithic entity.

It's the way resources are assembled that changes the game. And make no mistake, the game has changed thanks to containers.

Scale today depends on the control plane. The speed of the API used to launch and decommission resources is perhaps more important than the speed of the load balancing service itself. The speed of service discovery in an environment where resources are launched and retired in a matter of minutes becomes paramount to delivering requests to an available instance.

More than half (52%) of containers have a lifespan of five minutes or less, according to the Sysdig Container Usage Report 2019:

  • 22% <= 10 seconds
  • 17% <= 1 minute
  • 15% <= 5 minutes

More than two in five (42%) operate between 201 and 500 container instances. To maintain accuracy, the control plane must update its components frequently; far more frequently than even in cloud environments, and certainly far more frequently than ever seen with monolithic applications.

The speed with which the ingress controller (the load balancing mechanism) is updated to reflect the current set of available resources, then, becomes a critical capability. Because if it’s wrong, a consumer request could be directed to a resource that no longer exists—or is hosting a completely different service. Either way, the result is a longer response as the request is redirected to a resource that is available. The consumer must wait longer for a response and may choose to walk away instead.
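To make that dependency concrete, here is a minimal sketch, assuming a Kubernetes environment and Go with client-go, of how a load balancing component might keep its backend list in sync with the control plane. The service name ("checkout"), the namespace, and the watch-based refresh are illustrative assumptions, not a description of any particular ingress controller.

```go
// Minimal sketch: rebuild a backend list every time the control plane
// reports a change to the endpoints behind a (hypothetical) "checkout" service.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Watch the Endpoints object for the service. Every add, update, or
	// delete event from the control plane triggers a rebuild of the list.
	w, err := client.CoreV1().Endpoints("default").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=checkout"})
	if err != nil {
		log.Fatal(err)
	}

	for event := range w.ResultChan() {
		ep, ok := event.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		var backends []string
		for _, subset := range ep.Subsets {
			for _, addr := range subset.Addresses {
				for _, port := range subset.Ports {
					backends = append(backends, fmt.Sprintf("%s:%d", addr.IP, port.Port))
				}
			}
		}
		// Any request routed before this list refreshes can land on an
		// instance that no longer exists; that lag is the control-plane
		// speed at issue here.
		fmt.Printf("%s: backends now %v\n", event.Type, backends)
	}
}
```

Production ingress controllers typically use shared informers with caching and periodic resync rather than a raw watch loop, but the principle is the same: the gap between an endpoint change and the refresh of this list is exactly the control-plane latency the consumer ends up paying for.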

All this points to the speed of the control plane as a blocking factor in the scalability of applications deployed in a containerized environment.

Ultimately this means the scale of the control plane is an issue. The design of the API is an issue. The mechanisms by which requests and updates to the API are authenticated and authorized are an issue.

The control plane takes center stage when it comes to scalability today. A robust, scalable control plane is not a nice-to-have. It's an RFC MUST.


About the Author

Lori Mac Vittie
Distinguished Engineer and Chief Evangelist
