LIVE WEBINAR

Mission briefing: F5 AI AppSec hero challenge


25 March 2026


India: 10:30 AM – 12:35 PM (IST)
Singapore: 1:00 PM – 3:05 PM (SGT)
Australia: 4:00 PM – 6:05 PM (AEDT)


The scenario

The integration of large language models (LLMs) into the modern tech stack has opened a new "frontier of vulnerability." Traditional firewalls aren't enough when the attack is hidden in a natural language prompt.

This CTF is designed for security engineers, developers, and DevOps practitioners who want to get their hands dirty. You won't just hear about AI security—you will live it.

Your objectives

In this 90-minute live lab, you will be granted access to a sandbox environment to execute and defend against:

  • Prompt Injection & jailbreaking: Bypassing system instructions to force unintended model behavior.
  • Sensitive data disclosure: Extracting PII and proprietary data through clever "social engineering" of the AI.
  • Model abuse: Overwhelming LLM resources to cause service denial.
  • F5 advanced defense: Observing how real-time inspection identifies and neutralizes these payloads without breaking the user experience.

What’s at stake?

Beyond the technical "level-up," we are rewarding the fastest and most precise operators:

  • Grand prize: iPhone 17 Pro
  • Runner up: Apple Watch Series 11

*Terms and conditions apply

System requirement: To participate in the CTF, you will need a stable internet connection and a modern web browser. Lab credentials will be issued during the mission briefing.

Deliver and Secure Every App
F5 application delivery and security solutions are built to ensure that every app and API deployed anywhere is fast, available, and secure. Learn how we can partner to deliver exceptional experiences every time.
Connect With Us