PRESS RELEASE

F5 Accelerates AI in Service Provider Edge Environments with NVIDIA BlueField-3 DPUs

Published March 12, 2025
CONTACTS

F5 Public Relations Office
Otsuka, Suguro (Kyodo Public Relations Co., Ltd.)
F5-pr@kyodo-pr.co.jp

F5 BIG-IP Next Cloud-Native Network Functions Deployed on NVIDIA BlueField-3 DPU Enhance Data Management and Security, Enable New Edge AI Innovations, and Advance the Future of AI-RAN

 

* This is an abridged translation of a press release issued at Mobile World Congress (MWC) on March 3.

 

F5 (NASDAQ: FFIV), the global leader in multi-cloud application security and delivery, announced new Cloud-Native Network Functions (CNFs) deployed on NVIDIA BlueField-3 DPUs, further strengthening the technical partnership between the two companies. This solution provides optimized performance in Kubernetes environments, enabling advanced edge firewall protection, high-performance traffic management, and comprehensive web app and API security, including support for emerging Edge AI use cases.

The F5 Application Delivery and Security Platform integrates essential capabilities previously delivered as separate solutions, including high-performance load balancing, multi-cloud networking, complete web application and API security, and AI Gateway functionality, onto a single unified platform.

Christopher Rodriguez, Research Director at IDC's Cybersecurity & Trust practice, states: "While AI accelerates innovation, it introduces unprecedented complexity and risk to businesses. Combining application delivery and security capabilities into a single unified platform becomes a necessity for organizations aiming to stay one step ahead in this rapidly evolving landscape."

For more details on the F5 Application Delivery and Security Platform, please visit f5.com.

 

New Platform Solutions and Innovations

As part of its ongoing commitment to innovation on the F5 Application Delivery and Security Platform, F5 has unveiled several new solutions and features. These innovations empower F5's customers to leverage AI-driven capabilities to reduce complexity, simplify operations, and strengthen security posture across all applications. They include:

  • F5 AI Gateway: The F5 AI Gateway is available today for enterprises looking to streamline interactions across applications, APIs, and Large Language Models (LLMs), driving adoption of enterprise AI. Built as a powerful container-based solution for optimized performance, observability, protection, and cost reduction, the AI Gateway promises operational and security teams a seamless path towards efficient AI service deployment, significantly improved data output quality, and an enhanced user experience. 

  • F5 AI Assistant for NGINX One: Following the release of the AI Assistant for F5 Distributed Cloud Services in 2024, F5 will also introduce the AI Assistant to NGINX One. Powered by the F5 AI Data Fabric, this assistant functions as an intelligent partner, helping network, security, development, and platform operations teams streamline operations and improve return on investment (ROI). With a natural language interface, the NGINX One AI Assistant simplifies application delivery configuration and optimization, proactively addresses threats, and identifies anomalies before they reach production environments.

  • F5 AI Assistant for BIG-IP and iRules Code Generation and Explanation: Later this year, F5 will release an AI Assistant for BIG-IP, bringing new levels of automation and intelligence into the management of iRules. Customers can automate the creation, maintenance, and optimization of iRules, significantly streamlining resource usage and ensuring effective traffic management and secure application delivery. AI-driven tools generate optimal, validated iRules consistent with industry best practices, reducing errors and providing greater accuracy, reliability, and consistent outcomes. 

  • F5 VELOS CX1610 Chassis and BX520 Blade: F5 has expanded its VELOS product line with powerful new capabilities, providing an integrated, customizable solution optimized for the data-intensive workloads of AI and other modern applications, aimed at service providers and large enterprises. The new VELOS CX1610 chassis and BX520 400Gbps blades support multi-terabit throughput, comply with stringent Network Equipment-Building System (NEBS) standards, and support the most demanding 5G Fixed Wireless Access (FWA) services. The F5 VELOS product line delivers optimized data packet routing, granular security, effective load balancing, and low latency, addressing the intensive real-time data and ingestion requirements inherent to AI-driven solutions.

 

Deployment of Cloud-Native Network Functions (CNFs) on NVIDIA BlueField-3 DPU Accelerates AI Workloads and Enables New Edge Use Cases

The deployment of cloud-native network functions at the edge brings applications closer to users and data, strengthening data sovereignty and significantly improving user experience. This approach also provides critical support for real-time AI use cases, such as:

  • Enabling immediate decision-making, supporting autonomous vehicles, and detecting fraud.

  • Improving natural language processing (NLP) and augmented-reality (AR) applications.

By embedding Cloud-Native Network Functions (CNFs) in NVIDIA BlueField-3 DPUs, organizations benefit from significant performance enhancements in Kubernetes environments. Through the integration of CNFs, F5 enables customers not only to achieve higher performance and lower latency but also to substantially reduce power consumption and operating costs. Building on the technical collaboration between F5 and NVIDIA, solutions that run the F5 Cloud-Native Network Functions on NVIDIA DPUs enable new use cases such as Edge AI and next-generation AI-RAN networks.

F5 continues to build on NVIDIA DPUs through the NVIDIA DOCA software framework, achieving seamless integration and accelerating a variety of networking and security offload tasks. This comprehensive software integration enables fast development, easy onboarding, and powerful performance capabilities.

Deploying cloud-native network functions at the edge creates critical opportunities for service providers, enabling innovative services such as distributed access architecture (DAA) and private 5G edge security services. As shown by announcements at Mobile World Congress, AI-RAN (AI-driven Radio Access Network) also continues to gain momentum.

 

Unleashing the Potential of AI-RAN with NVIDIA and F5


F5 (Headquarters: Washington, USA; NASDAQ: FFIV), a global leader in multi-cloud application security and delivery, has announced new Cloud-Native Network Functions (CNFs) deployed on NVIDIA BlueField-3 DPUs, further deepening the companies' technical collaboration. This solution provides proven network infrastructure functions from F5, including edge firewall, DNS, and DDoS mitigation, accelerated through NVIDIA BlueField-3 DPUs as lightweight cloud-native services optimized for Kubernetes environments and supporting emerging Edge AI use cases.

The F5 Application Delivery and Security Platform supports the majority of the world's Tier-1 5G, mobile, and fixed-line communication networks. However, today's service providers recognize the challenge of scaling AI-driven applications across distributed environments, as existing core network infrastructure often lacks the necessary computing capacity to perform AI inferencing efficiently at scale.

By embedding F5's Cloud-Native Network Functions (CNFs) running on NVIDIA DPUs into Edge and Far-Edge infrastructure, computing resources can be optimized, significantly reducing power consumption per Gbps and lowering overall operational costs. At the same time, more stringent security measures become essential as providers drive higher utilization of edge infrastructure and enhance subscription-based services with advanced AI capabilities. Technologies from F5 and NVIDIA BlueField minimize latency while delivering sophisticated traffic management and security.

Deploying cloud-native network functions at the edge brings applications closer to users and data, enhancing data sovereignty, improving user experience, and reducing the costs associated with power, space, and cooling. The low latency this approach enables is crucial for AI applications and use cases such as:

  • Supporting instantaneous decision-making, autonomous vehicle operations, and fraud detection.

  • Enabling real-time user interactions, including Natural Language Processing (NLP) tools and AR/VR experiences.

  • Providing the continuous monitoring and processing needed by healthcare devices and manufacturing robots.

Integrating the BIG-IP Next Cloud-Native Network Functions on NVIDIA BlueField-3 DPUs further expands the previously introduced F5 BIG-IP Next for Kubernetes solution deployed on NVIDIA DPUs. F5 will continue leveraging the NVIDIA DOCA software framework, allowing seamless integration of F5's solutions with NVIDIA BlueField DPUs. This comprehensive development framework provides robust APIs, libraries, and tools for accessing hardware acceleration capabilities in NVIDIA BlueField DPUs. By utilizing DOCA, F5 ensures compatibility across generations of BlueField DPUs, enabling rapid integration and high performance across various network and security offload scenarios. Accelerating F5 cloud-native network functions with NVIDIA BlueField-3 DPUs also frees up CPU resources, which can then be reallocated to run other applications.

Edge deployments represent crucial opportunities for service providers, including distributed N6-LAN solutions for User Plane Functions (UPF), edge security services supporting Distributed Access Architectures (DAA), and private 5G networks. AI-RAN (AI Radio Access Network) is also gaining momentum, as recently demonstrated by SoftBank's announcement of a development environment built on NVIDIA technology.

 

Ahmed Guetari, Vice President and General Manager, Service Providers at F5, noted: "Our customers continue to invest in new AI infrastructure, driving a closer partnership between F5 and NVIDIA. In particular, service providers increasingly recognize the edge as an area of significant growth and innovation. This domain presents numerous possibilities for data ingestion and real-time processing, offering ample opportunities to introduce new services and AI capabilities."

Ash Bhalgat, Senior Director of AI Networking & Security Partnerships at NVIDIA, added: "Driven by the growing demands for edge-based AI inference, cloud-native network functions accelerated by NVIDIA's BlueField-3 DPUs allow service providers to achieve unparalleled performance, security, and efficiency. F5 and NVIDIA together are uniquely positioned to offer solutions that place AI closer to the user, creating powerful edge-AI use cases."

The general availability of F5’s BIG-IP Next Cloud-Native Network Functions deployed on NVIDIA BlueField-3 DPUs is scheduled for June 2025.

 

About F5

F5, Inc. (NASDAQ: FFIV) is the global leader that delivers and secures every app. Backed by three decades of expertise, F5 has built the industry’s premier platform—F5 Application Delivery and Security Platform (ADSP)—to deliver and secure every app, every API, anywhere: on-premises, in the cloud, at the edge, and across hybrid multicloud environments. F5 is committed to innovating and partnering with the world’s largest and most advanced organizations to deliver fast, available, and secure digital experiences. Together, we help each other thrive and bring a better digital world to life.

For more information visit f5.com
Explore F5 Labs threat research at f5.com/labs
Follow to learn more about F5, our partners, and technologies: Blog | LinkedIn | X | YouTube | Instagram | Facebook

F5 is a trademark, service mark, or tradename of F5, Inc., in the U.S. and other countries. All other product and company names herein may be trademarks of their respective owners.