BLOG

Shadow AI: The Silent Security Risk Lurking in Your Enterprise

Rachael Shah Thumbnail
Rachael Shah
Published July 16, 2025

Artificial intelligence is here to stay—there’s no going back to the pre-2010s. The gains in productivity, innovation, and economic impact are simply too transformative.

But with those extraordinary benefits come serious risks for organizations adopting AI. One of the most urgent is Shadow AI—the growing use of AI tools by employees without IT leaders’ knowledge or approval. This practice is rapidly emerging as a significant security and compliance threat, even as it remains relatively new and continues to evolve.

Shadow AI is a fast-developing offshoot of Shadow IT, the long-standing challenge where employees deploy unsanctioned software, cloud services, or systems within the enterprise. While Shadow IT already strains governance and security, Shadow AI adds a dangerous new layer of complexity.

A few examples of Shadow AI:

  • A software engineer uses a personal ChatGPT, Gemini, or Copilot account to generate boilerplate code, translate languages, refactor legacy code, or write test cases—unintentionally pasting proprietary code into a public model. That code could be stored, logged, or exposed. Worse, the AI-generated output might include insecure logic, fabricated APIs, or code that violates licensing terms.
  • A communications specialist uploads a confidential strategy memo into an AI tool to summarize or draft a customer-facing message. That proprietary content is now sitting on third-party servers—outside IT’s oversight.
  • A sales rep installs a Chrome extension that auto-generates prospecting emails using AI that connects with both their email and CRM systems. The risks? Possible data leakage, internal policy violations, and noncompliance with regulations like GDPR or the California Consumer Privacy Act.

These examples are just the beginning. Countless other Shadow AI scenarios are playing out every day across industries. The scale of the issue is so vast that Gartner® predicts that “by 2027, 75% of employees will acquire, modify or create technology outside IT’s visibility — up from 41% in 2022. As a result, top-down, highly centralized cybersecurity operating models will fail. Instead, CISOs must restructure cybersecurity into a lean, centralized function that supports a broad, federated set of experts and fusion teams embedded across the enterprise. This scales cybersecurity across the edge of the enterprise, closer to where technology and risk decisions are made and implemented.” 1

Prompt injection and data leakage: top concerns

While exact financial losses from Shadow AI are still being calculated, the potential for harm is clear—and growing. Organizations that fail to act now risk serious incidents, regulatory exposure, and erosion of customer trust.

Among the most concerning vulnerabilities is prompt injection—an attack in which a malicious user manipulates the AI’s input to bypass restrictions, leak sensitive data, or execute unintended actions. AI models tend to trust input by default, making them vulnerable to such exploits. A successful prompt injection could trick AI into revealing internal data, corrupting automated processes, or undermining decision-making systems.

Another major worry is data leakage, especially of personally identifiable information (PII), protected health information (PHI), bank details and financial records, source code and proprietary IP, credentials and access keys, or customer records.

Compounding the problem is the potential for regulatory non-compliance. Unauthorized AI use can easily violate standards such as GDPR, HIPAA, PCI DSS, and the Sarbanes-Oxley Act—exposing companies to penalties and reputational harm.

Some companies respond with bans—but is that effective?

In response to these risks, some organizations have banned employee use of non-approved generative AI tools. Samsung, for example, was one of the first in 2023 to prohibit the use of ChatGPT and similar AI-powered chatbots, citing concerns over leaks of sensitive internal information. Several banks and other enterprises have since issued similar restrictions or warnings.

Yet industry experts overwhelmingly advise against blanket bans.

Why governance, not bans, is the better path

Despite the legitimate risks, industry analysts recommend preventive policies and governance frameworks over outright prohibitions.

Bans tend to be counterproductive. They’re difficult to enforce, they often suppress innovation and morale, and they can drive AI use further underground—making it even harder to detect and control.

As one IT leader said in a Gartner Peer Community discussion about banning AI tools: “Bans simply don’t work. Even without policies, this action hurts innovation and sends the wrong message to staff and the world about our organization.”

What organizations should do instead

Here are a few practical governance steps organizations can take right now:

  1. Establish a clear AI usage policy
    Define what tools are approved, how data should be managed, which use cases are acceptable, and where guardrails exist. Make it crystal clear what’s permitted—and why.
  2. Maintain a vetted list of AI tools
    Evaluate tools for data privacy, security, and regulatory compliance. Encourage employees to use these instead of unvetted platforms. For instance, use enterprise versions of tools such as ChatGPT with logging and data loss protections and adopt internal AI assistants integrated into company platforms.
  3. Train employees and build awareness
    Help staff understand the risks—from data leakage and hallucinated content to compliance failures. Most misuse is unintentional.
  4. Monitor, then adapt
    Consider using (ethically and legally compliant) monitoring to detect unauthorized use. Respond with education rather than punishment. Continually evolve your policies as AI capabilities change.

Coming soon: F5 technology to help manage Shadow AI

Can technology offer a scalable solution? Absolutely.

F5 today announced data leakage detection and prevention for securing AI workloads and will launch powerful new capabilities in the coming months designed to help organizations comprehensively mitigate Shadow AI risk—particularly around unauthorized use of AI tools over encrypted channels (the default for most AI APIs and SaaS platforms).

Stay tuned for future announcements. Meanwhile, get all the latest F5 AI news on our Accelerate AI webpage.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

1Gartner, Maverick Research: CISOs Must Transform Their Role or Become Obsolete, June 19, 2025