Safeguarding AI systems: Why security must catch up to innovation

F5 Ecosystem | August 04, 2025

Artificial intelligence is potentially life-changing—and already has been in profound ways. From accelerating breakthroughs in medicine and education to reshaping work and everyday life, AI is transforming how we live and operate. But alongside these advances, AI presents powerful opportunities for cybercriminals.

Today, AI systems are actively targeted by adversaries who exploit vulnerabilities through data poisoning, manipulated outputs, unauthorized model theft via distillation, and exposed private data. These aren’t speculative risks; they’re real, rapidly evolving, and potentially devastating financially. Models are also being used to propagate massive improvements in email attacks and SMS / voice fraud, and deepfakes are increasingly difficult to detect, with several generating multi-million dollars in losses.

“To continue reaping the benefits of AI, organizations must treat its security with the same urgency they bring to networks, databases, and applications.”

According to the 2025 Stanford AI Index Report, the number of AI-related security incidents surged by 56.4% in 2024, reaching 233 reported cases. These weren’t mere glitches or technical hiccups. They involved serious compromises, from privacy violations and misinformation amplification to algorithm manipulation and breakdowns that put sensitive decisions at risk.

But as always, on of our favorite stats is dwell time, or the time between breach and detection. IBM’s Q1 2025 report revealed that AI-specific compromises take an average of 290 days to detect and contain—far longer than the 207-day average for traditional data breaches. That’s nearly 10 months of exposure, leaving these AI-augmented attackers ample time to cause serious harm.

Why most enterprises aren’t ready

To continue reaping the benefits of AI, organizations must treat its security with the same urgency they bring to networks, databases, and applications. But the current imbalance between adoption and protection suggests a different story.

F5’s 2025 State of AI Application Report underscores this point. Only 2% of organizations surveyed were considered highly secure and ready to scale AI safely. Meanwhile, 77% faced serious challenges related to AI security and governance.

The report also revealed that only a fraction of moderately prepared companies have deployed foundational safeguards. Just 18% had implemented AI firewalls, and only 24% practiced continuous data labeling, a key method for detecting adversarial behavior. Compounding the issue is the growing use of Shadow AI: unauthorized or unsanctioned AI tools that create dangerous visibility gaps in enterprise environments.

In the race to deploy AI for competitive gain, many organizations are inadvertently expanding their attack surface.

What makes AI vulnerable

AI’s unique characteristics expose it to novel forms of attack. Some of the most pressing vulnerabilities include:

  • Data poisoning: Attackers subtly inject corrupt or misleading data into training sets, compromising the behavior of AI models. In 2024, University of Texas researchers demonstrated how malicious content embedded in referenced documents could influence model outputs—persisting even after the documents were removed.
  • Model inversion and extraction: These attacks allow adversaries to reconstruct sensitive training data or replicate proprietary models. Real-world cases include the recovery of patient images from diagnostic systems and the reconstruction of private voice recordings and internal text from language models.
  • Evasion attacks: By making minute, often imperceptible changes to input data, attackers can trick AI models into producing incorrect outputs. One example: researchers fooled an autonomous vehicle’s vision system into misclassifying a stop sign as a speed limit sign by adding innocuous-looking stickers.
  • Prompt injection: Large language models (LLMs) are susceptible to carefully crafted input that manipulates their behavior. In one case, a ChatGPT-powered chatbot used by Chevrolet dealerships was tricked into agreeing to sell a car for $1—an outcome that exposed both reputational and legal risks.

These threats are not theoretical. They are active, and they are already undermining AI’s safety and reliability across industries.

Building a strong AI defense strategy

To meet these challenges, organizations must adopt a well-rounded defense strategy that addresses both general cybersecurity and AI-specific risks. The following five steps can help enterprises secure their AI systems:

  1. Strengthen data governance
    Build a clear inventory of AI assets—including models, APIs, and training datasets—and enforce tight access control policies. Data is foundational to AI, and its integrity must be protected at every level.
  2. Test continuously
    Move beyond traditional code reviews and implement adversarial testing and red teaming. These methods help uncover weaknesses such as model inversion and prompt injection before attackers exploit them.
  3. Embrace privacy-first design
    Incorporate encryption, data minimization, and differential privacy techniques. These approaches limit the risk of sensitive data exposure, even if a breach occurs.
  4. Adopt zero trust architecture
    Apply a “never trust, always verify” philosophy across all AI systems. Grant the minimum access necessary to every component and user and rigorously verify all activity.
  5. Monitor AI behavior in real time
    Implement tools and systems that watch for anomalies in model behavior or input patterns. Monitor for things like excessive API calls, suspicious prompts, or abnormal outputs—all of which could signal active threats.

F5’s approach to AI security

As more organizations embrace hybrid and multicloud environments, F5 through our Application Delivery and Security Platform (ADSP) is delivering AI-native security capabilities designed to protect modern infrastructure.

Part of this platform, F5 AI Gateway provides defense against prompt injection and data leakage by intelligently inspecting and routing LLM requests. Advanced API security solutions—available via F5 Distributed Cloud API Security and NGINX App Protect—safeguard APIs from misuse, exfiltration, and abuse.

Also a part of F5 ADSP, F5 Distributed Cloud Bot Defense uses machine learning to detect and block automated threats like credential stuffing with minimal false positives. And F5 BIG-IP Advanced WAF solutions secure applications and APIs while offloading security tasks from GPUs, improving performance in AI-intensive workloads.

In addition, F5’s AI Reference Architecture offers a blueprint for secure, reliable AI infrastructure across hybrid and multicloud environments. F5 also collaborates with leading AI innovators, including Intel, Red Hat, MinIO, and Google Cloud Platform, among many others, to help customers scale securely and efficiently.

Final thoughts: Secure AI, secure future

AI is transforming every industry it touches—but its potential comes with unprecedented risks. As threats grow more sophisticated, security leaders must move with urgency and foresight, embracing proactive tools, smarter architecture, and policy-driven protection.

AI security must be integrated into the very fabric of enterprise strategy. With the right combination of regulation, technology, and culture—anchored by proven frameworks like the U.S. National Institute of Standard and Technology’s AI Risk Management Framework and supported by platforms such as F5 ADSP—organizations can harness the full promise of AI while defending against its darker edge.

The AI frontier has arrived. The time to secure it is now.

Come to the panel discussion at Black Hat

If you’re planning to be in Las Vegas this week for Black Hat USA 2025, please join F5 Field CISO Chuck Herrin and other experts for a panel discussion during AI Summit as they discuss how to ramp up digital defenses in the AI age.

Also, be sure to visit our webpage to learn more about F5’s enterprise AI delivery and security solutions.

Share

Related Blog Posts

The everywhere attack surface: EDR in the network is no longer optional
F5 Ecosystem | 11/12/2025

The everywhere attack surface: EDR in the network is no longer optional

All endpoints can become an attacker’s entry point. That’s why your network needs true endpoint detection and response (EDR), delivered by F5 and CrowdStrike.

F5 NGINX Gateway Fabric is a certified solution for Red Hat OpenShift
F5 Ecosystem | 11/11/2025

F5 NGINX Gateway Fabric is a certified solution for Red Hat OpenShift

F5 collaborates with Red Hat to deliver a solution that combines the high-performance app delivery of F5 NGINX with Red Hat OpenShift’s enterprise Kubernetes capabilities.

F5 accelerates and secures AI inference at scale with NVIDIA Cloud Partner reference architecture
F5 Ecosystem | 10/28/2025

F5 accelerates and secures AI inference at scale with NVIDIA Cloud Partner reference architecture

F5’s inclusion within the NVIDIA Cloud Partner (NCP) reference architecture enables secure, high-performance AI infrastructure that scales efficiently to support advanced AI workloads.

F5 Silverline Mitigates Record-Breaking DDoS Attacks
F5 Ecosystem | 08/26/2021

F5 Silverline Mitigates Record-Breaking DDoS Attacks

Malicious attacks are increasing in scale and complexity, threatening to overwhelm and breach the internal resources of businesses globally. Often, these attacks combine high-volume traffic with stealthy, low-and-slow, application-targeted attack techniques, powered by either automated botnets or human-driven tools.

Volterra and the Power of the Distributed Cloud (Video)
F5 Ecosystem | 04/15/2021

Volterra and the Power of the Distributed Cloud (Video)

How can organizations fully harness the power of multi-cloud and edge computing? VPs Mark Weiner and James Feger join the DevCentral team for a video discussion on how F5 and Volterra can help.

Phishing Attacks Soar 220% During COVID-19 Peak as Cybercriminal Opportunism Intensifies
F5 Ecosystem | 12/08/2020

Phishing Attacks Soar 220% During COVID-19 Peak as Cybercriminal Opportunism Intensifies

David Warburton, author of the F5 Labs 2020 Phishing and Fraud Report, describes how fraudsters are adapting to the pandemic and maps out the trends ahead in this video, with summary comments.

Deliver and Secure Every App
F5 application delivery and security solutions are built to ensure that every app and API deployed anywhere is fast, available, and secure. Learn how we can partner to deliver exceptional experiences every time.
Connect With Us
Safeguarding AI systems: Why security must catch up to innovation | F5