Artificial intelligence is here to stay—there’s no going back to the pre-2010s. The gains in productivity, innovation, and economic impact are simply too transformative.
But with those extraordinary benefits come serious risks for organizations adopting AI. One of the most urgent is Shadow AI—the growing use of AI tools by employees without IT leaders’ knowledge or approval. This practice is rapidly emerging as a significant security and compliance threat, even as it remains relatively new and continues to evolve.
Shadow AI is a fast-developing offshoot of Shadow IT, the long-standing challenge where employees deploy unsanctioned software, cloud services, or systems within the enterprise. While Shadow IT already strains governance and security, Shadow AI adds a dangerous new layer of complexity.
A few examples of Shadow AI:
These examples are just the beginning. Countless other Shadow AI scenarios are playing out every day across industries. The scale of the issue is so vast that Gartner® predicts that “by 2027, 75% of employees will acquire, modify or create technology outside IT’s visibility — up from 41% in 2022. As a result, top-down, highly centralized cybersecurity operating models will fail. Instead, CISOs must restructure cybersecurity into a lean, centralized function that supports a broad, federated set of experts and fusion teams embedded across the enterprise. This scales cybersecurity across the edge of the enterprise, closer to where technology and risk decisions are made and implemented.” 1
While exact financial losses from Shadow AI are still being calculated, the potential for harm is clear—and growing. Organizations that fail to act now risk serious incidents, regulatory exposure, and erosion of customer trust.
Among the most concerning vulnerabilities is prompt injection—an attack in which a malicious user manipulates the AI’s input to bypass restrictions, leak sensitive data, or execute unintended actions. AI models tend to trust input by default, making them vulnerable to such exploits. A successful prompt injection could trick AI into revealing internal data, corrupting automated processes, or undermining decision-making systems.
Another major worry is data leakage, especially of personally identifiable information (PII), protected health information (PHI), bank details and financial records, source code and proprietary IP, credentials and access keys, or customer records.
Compounding the problem is the potential for regulatory non-compliance. Unauthorized AI use can easily violate standards such as GDPR, HIPAA, PCI DSS, and the Sarbanes-Oxley Act—exposing companies to penalties and reputational harm.
In response to these risks, some organizations have banned employee use of non-approved generative AI tools. Samsung, for example, was one of the first in 2023 to prohibit the use of ChatGPT and similar AI-powered chatbots, citing concerns over leaks of sensitive internal information. Several banks and other enterprises have since issued similar restrictions or warnings.
Yet industry experts overwhelmingly advise against blanket bans.
Despite the legitimate risks, industry analysts recommend preventive policies and governance frameworks over outright prohibitions.
Bans tend to be counterproductive. They’re difficult to enforce, they often suppress innovation and morale, and they can drive AI use further underground—making it even harder to detect and control.
As one IT leader said in a Gartner Peer Community discussion about banning AI tools: “Bans simply don’t work. Even without policies, this action hurts innovation and sends the wrong message to staff and the world about our organization.”
Here are a few practical governance steps organizations can take right now:
Can technology offer a scalable solution? Absolutely.
F5 today announced data leakage detection and prevention for securing AI workloads and will launch powerful new capabilities in the coming months designed to help organizations comprehensively mitigate Shadow AI risk—particularly around unauthorized use of AI tools over encrypted channels (the default for most AI APIs and SaaS platforms).
Stay tuned for future announcements. Meanwhile, get all the latest F5 AI news on our Accelerate AI webpage.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
1Gartner, Maverick Research: CISOs Must Transform Their Role or Become Obsolete, June 19, 2025