Testing reveals up to 3x throughput gains for MinIO storage with F5

F5 ADSP | April 13, 2026

AI workloads do not fail only when models underperform. They fail when the data path cannot keep up. As enterprises scale model training, fine-tuning, and retrieval-augmented generation (RAG), the movement of data between storage and compute has become a critical performance constraint. When network latency and jitter disrupt object storage throughput, model pipelines slow, GPUs sit idle, and infrastructure costs rise.

That is why F5 is announcing new independent performance validation showing how F5 BIG-IP can materially improve AI data delivery for S3-compatible storage environments.

Independent validation shows higher throughput during functional testing

SecureIQLab has validated that F5 BIG-IP Local Traffic Manager (LTM), running on an F5 VELOS CX410 chassis with two BX520 blades and deployed in front of a MinIO cluster, delivered up to 332% higher throughput under latency and jitter conditions than the baseline configuration without F5 in the data path. The results show that as network impairment increased, the value of introducing a programmable control point into the storage data path became even more pronounced.

This matters because most AI infrastructure conversations still focus on compute and storage in isolation. In practice, AI performance depends on the full system. If the network path between storage and compute is unstable, even the most capable GPU cluster cannot operate efficiently. For architects building AI factories and large-scale inference environments, the storage data path must be treated as a disciplined part of the overall architecture, not as a passive connection between applications and object stores.


The SecureIQLab report evaluated S3-compatible storage throughput across realistic scenarios, including local data center environments, SD-WAN conditions, and long-haul networks with up to 75 milliseconds of latency and 5 milliseconds of jitter. Under the highest latency scenarios, F5 sustained materially higher throughput than the baseline configuration, demonstrating the impact of TCP proxying, connection reuse, and traffic optimization in stabilizing object transfers. The testing also measured raw encrypt/decrypt throughput, with the F5 VELOS CX410 chassis deployment reaching 173.65 Gbps using 1 GB objects in a two-blade BX520 configuration.
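As a rough sanity check on that raw encrypt/decrypt figure, 173.65 Gbps works out to roughly 21.7 one-gigabyte objects per second. This is a back-of-the-envelope conversion, not a number from the report, and it ignores protocol overhead:

```python
# Back-of-the-envelope conversion of the reported encrypt/decrypt
# throughput (173.65 Gbps) into 1 GB objects moved per second.
# Assumes 1 GB = 8 gigabits and ignores protocol overhead.
reported_gbps = 173.65
object_size_gb = 1.0
objects_per_second = reported_gbps / (object_size_gb * 8)
print(f"{objects_per_second:.1f} one-GB objects per second")  # ~21.7
```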

In addition to the peak throughput increase of 332%, the SecureIQLab performance validation report includes the following key findings:

  • In a low-latency enterprise SD-WAN scenario (10 ms latency, 2 ms jitter), F5 VELOS increased S3 throughput from 62.5 Gbps (performance without F5 VELOS) to 121.7 Gbps—a 95% performance improvement.
  • In a low-latency broadband/VPN scenario (10 ms latency, 5 ms jitter), F5 VELOS increased S3 throughput from 46.8 Gbps (performance without F5 VELOS) to 119.6 Gbps—a 155% performance improvement.
  • In a high-latency multicloud backbone scenario (75 ms latency, no jitter), F5 VELOS increased S3 throughput from 10.9 Gbps (performance without F5 VELOS) to 41.5 Gbps—a 281% performance improvement.
  • In a high-latency, high-variability edge scenario (75 ms latency, 5 ms jitter), F5 VELOS increased S3 throughput from 9.6 Gbps (performance without F5 VELOS) to 40.1 Gbps—a 316% performance improvement.
  • In a high-latency SD-WAN multicloud scenario (75 ms latency, 2 ms jitter), F5 VELOS increased S3 throughput from 9.5 Gbps (performance without F5 VELOS) to 41.2 Gbps—a 332% performance improvement.
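The improvement percentages above follow directly from the with/without throughput pairs. The quick recomputation below lands within a point or two of every published figure; the small deviations on the last two scenarios are presumably rounding in the reported Gbps values:

```python
# Recompute the reported improvement percentages from the
# with/without-F5 throughput pairs (Gbps). Small deviations from the
# published figures are likely rounding in the reported throughputs.
scenarios = {
    "Enterprise SD-WAN (10 ms / 2 ms)": (62.5, 121.7),
    "Broadband/VPN (10 ms / 5 ms)": (46.8, 119.6),
    "Multicloud backbone (75 ms / 0 ms)": (10.9, 41.5),
    "High-variability edge (75 ms / 5 ms)": (9.6, 40.1),
    "SD-WAN multicloud (75 ms / 2 ms)": (9.5, 41.2),
}

def improvement_pct(without_f5, with_f5):
    """Percentage gain of the F5-fronted result over the baseline."""
    return (with_f5 - without_f5) / without_f5 * 100

for name, (baseline, with_f5) in scenarios.items():
    print(f"{name}: {improvement_pct(baseline, with_f5):.0f}% improvement")
```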

Latency is the primary limiter of S3-compatible storage throughput

Latency between the S3-compatible storage client and the storage nodes affected throughput far more than expected. In the baseline tests, throughput at 10 ms of latency dropped to approximately 60% of the 0 ms baseline, and at 75 ms of latency it dropped to approximately 8.25% of the 0 ms baseline.

Although F5 expected jitter to have the greater relative impact on throughput, the opposite proved true: latency affected throughput between the S3 client and storage nodes far more than jitter did.
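This latency sensitivity is consistent with the textbook single-connection TCP bound, in which sustained throughput cannot exceed window size divided by round-trip time. That model predicts the 75 ms throughput should be about 10/75 ≈ 13% of the 10 ms throughput, close to the measured 8.25/60 ≈ 14% ratio. The sketch below illustrates the bound; the 64 MiB window is an illustrative assumption, not a parameter from the test, which used many parallel connections:

```python
# Simplified single-connection TCP throughput ceiling:
#   throughput <= window_size / RTT
# The 64 MiB window is an illustrative assumption, not a value from
# the SecureIQLab test, which ran many parallel connections.
WINDOW_BYTES = 64 * 2**20  # assumed 64 MiB TCP window

def max_throughput_gbps(rtt_seconds):
    """Upper bound on one TCP connection's rate at a given RTT."""
    return WINDOW_BYTES * 8 / rtt_seconds / 1e9

for rtt_ms in (10, 75):
    print(f"{rtt_ms} ms RTT -> {max_throughput_gbps(rtt_ms / 1000):.1f} Gbps ceiling")
```

A full-proxy architecture helps here because it terminates the client connection locally and maintains separately tuned, reused connections toward the storage nodes, so neither side has to ramp a cold window across the full end-to-end RTT.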

An F5 VELOS system running BIG-IP LTM as a proxy optimizes traffic between S3-compatible storage clients and storage nodes, increasing throughput on real-world networks well beyond what was expected at the outset of the validation.

Data delivery is foundational to AI architecture

For business and technology leaders, the takeaway is straightforward: AI infrastructure performance is no longer just a compute problem or a storage problem. It is a data delivery problem. As part of the F5 Application Delivery and Security Platform (ADSP), F5 BIG-IP provides a scalable, secure, and high-performance entry point for ingesting datasets used in AI model training, fine-tuning, and RAG workflows. By combining intelligent traffic management, policy-driven control, and resilient failover, F5 helps make AI pipelines more predictable, governable, and efficient.

That value shows up in three architectural advantages.

First, F5 improves distributed data transport by shaping and optimizing the movement of massive datasets across storage and compute infrastructure. Second, F5 enables loose coupling and endpoint resilience, helping organizations maintain performance and flexibility as storage architectures evolve. Third, F5 brings programmable traffic management to the AI data path, allowing teams to apply policy, observability, and control where they previously had little visibility.

Ultimately, this allows organizations to maximize the return on AI infrastructure investments while ensuring systems perform reliably under real-world conditions, not just in ideal lab environments. It also signals a broader industry shift: application delivery principles are becoming foundational to modern AI infrastructure design.

Download the full SecureIQLab report

Read the full SecureIQLab performance validation report, which details the methodology, topology, and results behind these findings to see how F5 helps optimize, stabilize, and scale AI data delivery for MinIO and other S3-compatible storage environments.

F5’s focus on AI doesn’t stop here—explore all the ways F5 is delivering and securing AI applications.

About the Authors

Florin Meilescu
Senior Principal Software Engineer | F5

Muppalla Sridhar
Principal Technical Marketing Engineer | F5

Paul Pindell
Principal Solutions Architect | F5

