F5 Blog

Read about the latest trends in multicloud networking, API security, application services, and digital transformation.

All Blog Posts

F5 accelerates and secures AI inference at scale with NVIDIA Cloud Partner reference architecture

F5’s inclusion within the NVIDIA Cloud Partner (NCP) reference architecture enables secure, high-performance AI infrastructure that scales efficiently to support advanced AI workloads.

Lessons we are learning from our security incident

F5 CISO Christopher Burger answers common questions from customers surrounding the recently disclosed security incident.

AI, inference, and tokens, oh my!

AI applications run on inference servers, driven by tokens—not traffic. Discover why understanding tokens and JSON data is critical to designing smarter infrastructure.

Controlling generative AI applications through context

Controlling generative AI apps means leveraging application, environmental, and business context. Learn why complete context ensures trust and adoption.

Current trends in cloud-native technologies, platform engineering, and AI

Gain key insights from F5 NGINX Annual Survey respondents into the latest trends in cloud-native technology, security, platform engineering, and AI.

Secure and optimize your AI journey with F5 and Google Cloud

Discover how F5 can help optimize your Google Cloud performance, enhance security, and increase cost-efficiency for AI initiatives.

Three things every CISO should know about API security

What can organizations learn by viewing API security from the attacker’s perspective? Read our blog post to find out.

F5 completes acquisition of CalypsoAI, introduces F5 AI Guardrails and F5 AI Red Team

Learn how F5 defines and deploys adaptive guardrails for AI systems.

Introducing the CASI Leaderboard

Explore the new CASI index tracking emerging trends in AI security.

Inference: The most important piece of AI you’re pretending isn’t there

Scaling AI means scaling inference. Learn why inference servers are critical for managing performance, telemetry, and security in production AI workloads.