BLOG

The Use of Generative AI Tools in the Workplace Demands More Security

Rachael Shah
Published August 22, 2023

Generative AI is like that one really shiny object that no one can take their eyes off of.

Almost every business worldwide is drooling over the potential of generative AI tools to transform their operations, and for good reason.

Generative AI, a form of machine learning that creates content like text and images in response to natural language requests, is a game-changing technology. From boosting productivity to improving personalization, generative AI tools deliver a string of benefits.

For example, in a recent Gartner poll of more than 2,500 executives, 38% cited customer experience and retention as the primary purpose of their generative AI investments, followed by revenue growth (26%), cost optimization (17%), and business continuity (7%).

On top of that, in a separate IBM survey of 3,000 CEOs, 69% of respondents see broad benefits from generative AI across their organizations. However, only 30% of non-CEO senior executives say their organization is ready to adopt generative AI responsibly.

Let’s be frank: That’s alarming. Less than a third of senior execs say their organization is ready to adopt generative AI responsibly?

It’s easy to get swept off your feet by all the gains you could pocket by using generative AI tools. But let’s not forget: Using generative AI tools can pose major risks and challenges.

Given how fast generative AI is evolving, organizations can quickly find themselves in hot water if they’re not careful about how their people leverage generative AI tools and what data gets fed into generative AI interfaces.

Here are just a few of the many ways generative AI can abruptly turn a beneficial experience into a costly security headache:

  1. Leaking sensitive corporate data.
    You may have heard or read about Samsung staff accidentally uploading internal source code to OpenAI’s chatbot ChatGPT. Not only did that expose proprietary data, but it also created a major, global PR nightmare for Samsung. And Samsung likely isn’t the only organization this has happened to, or will happen to. It could be your company next.

  2. Disseminating misinformation and biases.
    Generative AI tools like ChatGPT are powered by large language models (LLMs), and LLMs are trained on data sets drawn from articles, websites, and everything in between, including bad and incorrect information. It’s easy to assume a generative AI tool won’t produce wrong information or proliferate biases, and to use its output in your campaigns or build your assumptions on it. But before you know it, the line between truth and falsehood is blurred, and so is your reputation. And that’s before you even get to AI hallucinations, where a chatbot confidently invents information out of whole cloth.

  3. Mishandling personally identifiable information (PII).
    PII can be collected during interactions with generative AI tools, and it’s up to the businesses behind those tools to uphold strict data privacy measures. If they don’t, you might find yourself battling account takeovers, or facing steep legal trouble and fallout with your customers if bad actors get ahold of their PII. One basic safeguard, sketched just below this list, is scrubbing obvious PII out of prompts before they ever leave your network.
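
To make that last point concrete, here’s a minimal sketch in Python of the safeguard mentioned above: scrubbing obvious PII out of a prompt before it ever reaches a generative AI tool. The regex patterns and the scrub_pii helper are illustrative assumptions, not a production-grade detector.

    import re

    # Hypothetical patterns for common PII; a real detector needs far more
    # coverage (names, addresses, account numbers, and so on).
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def scrub_pii(prompt: str) -> str:
        """Replace anything matching a PII pattern with a labeled placeholder."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
        return prompt

    print(scrub_pii("Email jane.doe@example.com, SSN 123-45-6789."))
    # Prints: Email [REDACTED-EMAIL], SSN [REDACTED-SSN].

Pattern matching like this is only a first line of defense; regexes alone miss plenty, which is why enterprise tooling layers on data classification and entity recognition.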

Luckily, many organizations are taking steps to uphold safe practices. Some offer training on the benefits and risks of generative AI, or encourage staff to explore and experiment responsibly. Others have written generative AI policies that employees are required to acknowledge and follow.

While these measures certainly have merit, the popularity, complexity, and fast-paced evolution of generative AI tools demand more security. Plus, there’s always going to be human error in the equation.

Now I’m sure you’ve heard it once. Or maybe many times: What you can’t see can hurt you. And when it comes to the use of generative AI tools in the workplace, that couldn’t be truer. A lack of awareness could lead to your company getting the short end of the stick.

To protect against the threats that come with using generative AI in the workplace, companies must take a zero trust approach. Remember the key tenet of zero trust: “Never trust. Always verify.” And one key zero trust strategy you should prioritize today is detecting access to generative AI tools.
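
What might “detecting access” look like at its simplest? Below is a minimal sketch in Python that flags outbound requests whose hostname matches a hypothetical, hardcoded watchlist of generative AI services. The domains and alert format are assumptions for illustration only; purpose-built solutions rely on curated, continuously updated URL categories instead.

    from urllib.parse import urlparse

    # Hypothetical watchlist; real tooling uses curated, continuously
    # updated URL categories rather than a hardcoded set.
    GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

    def is_genai_request(url: str) -> bool:
        """Return True if the request targets a known generative AI service."""
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)

    for url in ("https://chat.openai.com/backend-api/conversation",
                "https://intranet.example.com/wiki"):
        if is_genai_request(url):
            print(f"ALERT: generative AI access detected -> {url}")

Matching on hostnames is deliberately simplistic: because most of this traffic is TLS-encrypted, production deployments decrypt and inspect it in line rather than trusting URLs alone.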

By gaining greater visibility into, and finer-grained control over, how employees leverage generative AI tools at work, security teams are empowered to identify and manage potential risks. This helps organizations do three things, sketched in code after the list:

  1. Protect data, PII, code, intellectual property (IP), and more.
  2. Enable and apply policies that support company guidelines.
  3. Maintain their security against intentional or accidental misuse of generative AI tools.
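
To make those three points concrete, here’s a hedged sketch of how detection could feed a simple policy decision. The user groups, destination, and actions are assumptions invented for illustration, not any real product’s policy model.

    from dataclasses import dataclass

    # Hypothetical company guideline: only engineering may use this chatbot.
    APPROVED_GROUPS = {"chat.openai.com": {"engineering"}}

    @dataclass
    class Request:
        user_group: str                # e.g. "engineering", "marketing"
        destination: str               # hostname flagged by detection
        contains_sensitive_data: bool  # verdict from an upstream DLP/PII scan

    def decide(req: Request) -> str:
        # 1. Protect data, PII, code, and IP: never let flagged content out.
        if req.contains_sensitive_data:
            return "BLOCK"
        # 2. Apply policies that support company guidelines.
        if req.user_group not in APPROVED_GROUPS.get(req.destination, set()):
            return "BLOCK"
        # 3. Guard against misuse: allow, but keep an audit trail.
        return "ALLOW_AND_LOG"

    print(decide(Request("marketing", "chat.openai.com", False)))    # BLOCK
    print(decide(Request("engineering", "chat.openai.com", False)))  # ALLOW_AND_LOG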

As the generative AI landscape continues to rapidly unfold, companies can rest easier with a zero trust security framework that provides generative AI tool detection.

To learn more about how F5 can help you mitigate potential risks of using generative AI tools in the workplace, check out this F5 BIG-IP SSL Orchestrator DevCentral article.