Download the full SecureIQLab report
AI workloads depend on fast, reliable data movement between storage and compute. When latency and jitter disrupt that flow, throughput drops, model training slows, and GPU resources are underutilized.
By sitting between compute and storage, F5 BIG-IP helps optimize traffic flow, improve connection efficiency, and stabilize throughput across distributed AI environments. This reduces the performance impact of latency and jitter, shields storage from bursty demand, and supports more consistent data delivery for training, fine-tuning, and RAG workloads. For architects designing scalable AI infrastructure, treating the storage data path as a managed network discipline, rather than a direct application-to-storage connection, helps maximize GPU utilization, improve pipeline reliability, and support scalable AI training and inference.
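As a rough illustration of this pattern (not taken from the report), an architect might publish the storage cluster behind a BIG-IP virtual server so that S3 clients address one stable endpoint while BIG-IP health-monitors, pools, and load-balances the individual storage nodes. The sketch below uses BIG-IP's TMSH configuration style; all names, addresses, and profile choices are hypothetical:

```
# Hypothetical BIG-IP (TMSH) sketch: a virtual server fronting an
# S3-compatible storage pool. Addresses and names are illustrative only.
ltm pool /Common/minio_s3_pool {
    load-balancing-mode least-connections-member
    members {
        /Common/10.0.10.11:9000 { address 10.0.10.11 }
        /Common/10.0.10.12:9000 { address 10.0.10.12 }
        /Common/10.0.10.13:9000 { address 10.0.10.13 }
    }
    monitor /Common/https
}
ltm virtual /Common/vs_s3_storage {
    destination /Common/10.0.20.100:443
    ip-protocol tcp
    pool /Common/minio_s3_pool
    profiles {
        # Separate client-side and server-side TCP tuning, e.g. WAN-facing
        # toward GPU clients and LAN-facing toward the storage nodes.
        /Common/tcp-wan-optimized { context clientside }
        /Common/tcp-lan-optimized { context serverside }
    }
    source-address-translation { type automap }
}
```

Compute clients would then point their S3 endpoint at the virtual server address rather than at individual storage nodes, letting BIG-IP absorb node failures and apply per-side transport tuning on the storage data path.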
SecureIQLab validated testing of F5 VELOS and BIG-IP Local Traffic Manager in front of a MinIO cluster and found up to 3.3x higher S3-compatible storage throughput under real-world network conditions. The results show how F5 helps improve AI data delivery by optimizing the data path between storage and compute across distributed environments.
Download the SecureIQLab report to review the test methodology, architecture, and detailed throughput results, including:

The impact of network latency and jitter on S3-compatible storage throughput in distributed AI environments
Techniques F5 BIG-IP uses to optimize data movement between compute and storage, including traffic management and transport optimization
Throughput performance across real-world network conditions, including local data center, SD-WAN, and long-haul scenarios
Note: Performance results are based on testing conducted by F5 in a controlled lab environment and validated by SecureIQLab. Results may vary depending on workload characteristics, infrastructure configuration, and network conditions.
Report originally published by SecureIQLab on April 2, 2026.