Secure model inference for Red Hat OpenShift AI

Protect AI-powered applications against prompt injection, sensitive data leakage, toxic content, and misuse with F5 AI Guardrails. Accelerate F5 deployments with certified Operators.

WAFs are not enough for AI security

Web application firewalls (WAFs) are essential for filtering, monitoring, and blocking malicious HTTP/S traffic before it reaches AI apps, APIs, and models. But AI threats such as prompt injection, data leakage, and toxic content infiltrate at the content layer and are not caught by a WAF. Skills gaps, operational complexity, and unresolved security questions also slow the transition from AI planning and proofs of concept to production.

Enterprises need security solutions at the content layer that protect their models, apps, and data across the AI lifecycle—without adding friction for security teams, developers, AI app owners, or end users.

AI security, where and when you need it

F5 AI Red Team delivers adaptive, agent-based testing to simulate adversarial attacks, proactively uncovering vulnerabilities in development and test, and validating attack resilience and performance once in production.

F5 AI Guardrails sits in front of your LLM endpoints, intercepting and evaluating inputs and responses. It mitigates risks including data leakage, harmful outputs, and adversarial threats with comprehensive runtime security for AI apps, APIs, models, and agents.
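The intercept-and-evaluate pattern described above can be sketched in a few lines. This is a minimal illustration only: the names (`Policy`, `violates`, `guarded_call`) and the regex-based checks are invented for this sketch and do not reflect F5 AI Guardrails' actual API or detection methods, which use far more sophisticated scanners.

```python
# Illustrative sketch of a runtime guardrail gate: evaluate the input before
# it reaches the LLM, then evaluate the response before it reaches the user.
# All names and patterns here are hypothetical stand-ins, not F5's API.
import re
from dataclasses import dataclass, field

@dataclass
class Policy:
    blocked_patterns: list = field(default_factory=lambda: [
        r"ignore (all )?previous instructions",   # crude prompt-injection signal
        r"\b\d{3}-\d{2}-\d{4}\b",                 # US-SSN-like pattern (PII)
    ])

def violates(text: str, policy: Policy) -> bool:
    """True if any active policy pattern matches the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in policy.blocked_patterns)

def guarded_call(prompt: str, llm, policy: Policy = Policy()) -> str:
    """Gate both the input and the model's response against the policy."""
    if violates(prompt, policy):
        return "[blocked: input violated active policy]"
    response = llm(prompt)
    if violates(response, policy):
        return "[blocked: response violated active policy]"
    return response
```

Because the gate sits inline in front of the endpoint, the application code behind it needs no changes; the same check applies to every model, API, or agent routed through it.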

Both solutions are available as certified Operators for Red Hat OpenShift, integrating AI security directly into Red Hat OpenShift AI, a specialized AI/ML platform-as-a-service built on top of OpenShift that’s designed to train, serve, and manage AI models. This integrated capability from F5 allows teams to address AI risk earlier in the development lifecycle while maintaining consistent security controls as AI applications move toward production.

Figure 1. AI Guardrails evaluates inputs and responses for LLM endpoints, blocking anything that violates active policies. AI Red Team proactively launches agent-based simulated attacks to ensure a strong security posture for AI systems—before deployment, and over time.

Accelerate enterprise AI security deployments

Certified OpenShift Operators for AI Guardrails and AI Red Team

Certified Red Hat OpenShift Operators automate the creation, configuration, and management of application instances running in the environment. For F5 solutions, they help enterprises deploy and operate AI security faster. The certified Operators for F5 AI Guardrails and F5 AI Red Team provide the foundational platform integration needed to embed AI security directly into environments using familiar, Kubernetes-native workflows. As certified solutions, these Operators have undergone rigorous testing by Red Hat to verify their security, interoperability, and lifecycle management, helping customers scale deployments with confidence.
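The Operator pattern behind this automation can be summarized as a reconcile loop: the Operator watches a declared desired state and continuously converges the cluster toward it. The sketch below illustrates that idea in the abstract; the resource names are invented for illustration and are not F5's actual custom resources.

```python
# Conceptual sketch of the Operator reconcile loop: diff the declared desired
# state against the observed actual state and derive the converging actions.
# Resource names below are hypothetical, not F5's actual CRDs.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")       # declared but missing
        elif actual[name] != spec:
            actions.append(f"update {name}")       # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")       # present but no longer declared
    return actions

# A hypothetical desired state for a guardrails deployment:
desired = {"guardrails-processor": {"replicas": 2}, "guardrails-policy": {"version": 3}}
actual = {"guardrails-processor": {"replicas": 1}}
```

Running the loop repeatedly is what gives Operators their self-healing, lifecycle-management behavior: configuration drift and missing components are detected and corrected automatically.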

Together, Red Hat and F5 reduce operational complexity, accelerate deployment timelines, and help enterprises apply industry-leading AI security controls with less friction.

Proven AI quickstarts for AI Guardrails

Building on the Operator foundation, F5 is also delivering AI quickstarts that leverage Red Hat’s established quickstart framework, giving customers a faster path from proof-of-concept to production.

Red Hat AI quickstarts are a validated catalog of ready-to-run, industry-specific use cases that allow users to work directly with solutions for key scenarios. They include blueprints for deployments and configurations to quickly and safely deliver practical experiences for AI developers and security practitioners.

F5 is actively investing in AI quickstarts to support our joint customers on Red Hat OpenShift environments. F5’s AI quickstart for AI Guardrails demonstrates a complete RAG chatbot on OpenShift AI for a fictitious financial services organization; the same architecture applies to any industry handling sensitive data.

The AI quickstart for AI Guardrails includes four threat scenarios, supported by product scanners:

| Use case | Threat | F5 protections |
| --- | --- | --- |
| Prompt injection, jailbreak, system prompt, obfuscation | Attacker overrides system instructions | Detect and block prompt attacks in inputs |
| PII masking and blocking | Model takes in or outputs sensitive personal data | Detect then block or redact sensitive data from inputs and AI outputs |
| Restricted topics | Model answers outside approved domain | Block restricted topics from AI inputs |
| EU AI Act | Subliminal manipulation, biometric surveillance, emotion recognition, and more | Block non-compliant inputs and responses |
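The four scenarios above follow a common shape: named scanners flag content, and flagged content is blocked or redacted. The sketch below illustrates that dispatch shape only; the scanner names, keyword checks, and regexes are simplistic stand-ins invented for illustration, not the product's actual scanners, which use trained detectors.

```python
# Illustrative sketch of scanner dispatch for the four quickstart scenarios.
# Every name and pattern here is a hypothetical stand-in for illustration.
import re

SCANNERS = {
    "prompt_injection": lambda t: bool(re.search(r"ignore (all )?previous instructions", t, re.I)),
    "pii":              lambda t: bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", t)),  # SSN-like
    "restricted_topic": lambda t: "investment advice" in t.lower(),
    "eu_ai_act":        lambda t: "emotion recognition" in t.lower(),
}

def scan(text: str) -> list:
    """Return the names of all scanners that flag the text."""
    return [name for name, check in SCANNERS.items() if check(text)]

def redact_pii(text: str) -> str:
    """'Detect then redact' behavior for the PII scenario."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)
```

A policy then decides the action per scanner: block on a prompt-injection or restricted-topic hit, redact on a PII hit, and so on, for both inputs and responses.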

Get started

Red Hat OpenShift Operators for F5 AI Guardrails and F5 AI Red Team are available in the Red Hat Ecosystem Catalog for customers deploying AI workloads on Red Hat OpenShift. The AI Guardrails quickstart is available now through the Red Hat AI quickstart catalog.

Learn more about F5 and Red Hat’s partnership at f5.com/redhat.

KEY BENEFITS
Secure models and data

Protect LLMs from threatening inputs and block inappropriate outputs

Harden AI before you deploy

Actively test AI in development by simulating known attacks

Accelerate F5 deployments

Simplify and automate F5 deployments and lifecycle management

Explore AI security use cases

Get hands-on with AI security use cases to accelerate design and configuration

Deliver and Secure Every App
F5 application delivery and security solutions are built to ensure that every app and API deployed anywhere is fast, available, and secure. Learn how we can partner to deliver exceptional experiences every time.