
AI is Here: How Should CISOs Respond?

AI tools are spreading rapidly and CISOs need to be ready.
June 30, 2023
5 min. read

Introduction

With artificial intelligence (AI) use growing in the enterprise, Chief Information Security Officers (CISOs) play a critical role in its implementation and adoption. CISOs need to prepare both for the risks associated with AI content creation and for AI-assisted security threats from attackers. By following some key best practices, we’ll be better prepared to safely welcome our new robot overlords into the enterprise!

AI is Growing Fast

Artificial intelligence isn’t a brand-new development; for example, AI and machine learning (ML) already drive many of our solutions here at F5. However, the popularity of ChatGPT has sparked massive interest in the potential of generative AI, and many businesses are deploying it across the enterprise. AI technology is now in the wild, and it’s moving faster than any other technology I’ve seen.

There are several compelling use cases for generative AI in the enterprise:

  • Content Creation: Tools such as ChatGPT can assist content creators in generating ideas, outlines, and drafts—potentially saving individuals and teams significant time and effort.
  • Learning and Education: Properly trained AI tools can be used to quickly understand new and complex subjects by summarizing large amounts of information, answering questions, and explaining complicated concepts in simple language.
  • Coding Support: Tools like GitHub Copilot1 and OpenAI’s API service2 can help developers write code more efficiently and identify errors more quickly.
  • Product and Operations Support: Tools can be used to more efficiently prepare common reports and notices, such as bug resolutions.

Issues and Challenges

However, there are challenges to overcome. One is the question of whether using AI at all will run afoul of laws and regulations in international markets. Earlier this year, OpenAI temporarily blocked the use of ChatGPT in Italy after the Italian Data Protection Authority accused it of unlawfully collecting user data. German regulators are examining whether ChatGPT adheres to the European General Data Protection Regulation (GDPR). In May, the European Parliament took a step closer to issuing the first rules on the use of artificial intelligence.3

Another challenge is the set of issues around data collection and the accidental disclosure of personal or proprietary information. Companies need to protect their confidential information from other companies and individuals using the same tools, and ensure they aren’t plagiarizing from those parties in turn. We’ve already seen reports of intellectual property being entered into public generative AI systems, which could impact a company’s ability to defend its patents. One AI-powered transcription and note-taking service makes copies of any materials presented in Zoom calls that it monitors.

The third major challenge is the threat of enhanced cyberattacks. AI-powered attack software could try many possible approaches, learn from how we respond to each, and quickly adjust its tactics to devise an optimal strategy, all at a speed far beyond any human attacker. We have already seen sophisticated new phishing attacks that use AI, including attacks that impersonate individuals both in writing and in speech. One AI tool, PassGAN (short for Password Generative Adversarial Network), has been found to crack passwords faster and more efficiently than traditional methods.

CISOs and AI

As CISOs, our job isn’t to say no to new technology. We ask questions, and we provide guidance to help leaders create an organizational strategy. A good AI strategy provides guidelines for use and takes into account legal, ethical, and operational considerations.

When used responsibly and with proper governance frameworks in place, generative AI can provide businesses with many advantages, from automating routine processes to optimizing operations. Let’s look at some things you need to think about, and what actions you might need to take, based on generative AI’s risk posture.

"A good AI strategy takes into account its legal, ethical, and operational considerations."

Creating a Comprehensive AI Strategy

New technologies such as generative AI bring opportunities, but they also bring risks that we have a responsibility to consider and manage on behalf of the company and our customers. Issuing a policy on the use of AI will ensure all employees understand and adhere to the secure, legal, and ethical use of AI applications and tools. A comprehensive AI strategy ensures privacy, security, and compliance, and needs to consider:

  • The use cases where AI can provide the most benefit.
  • The necessary resources to implement AI successfully.
  • A governance framework to manage the safety of customer data and ensure compliance with regulations and copyright laws in every country where you do business.
  • The impact of AI implementation on employees and customers.

Once your organization has assessed and prioritized use cases for generative AI, a governance framework needs to be established for AI services such as ChatGPT. Components of this framework include rules for data collection and retention, as well as policies for access control and encryption. For in-house AI systems, policies must be created to mitigate the risk of bias, anticipate ways the systems can be abused, and limit the harm they can do if used improperly.

Your company’s AI strategy should also cover how changes brought about by AI automation will affect employees and customers. Employee training initiatives can help ensure that everyone understands how these new technologies are changing day-to-day processes and how threat actors may already be using them to further increase the efficacy of their social engineering attacks. Customer experience teams should assess how changes resulting from AI implementation might impact customer service delivery so that they can adjust accordingly.

AI and Security

A process for establishing and maintaining strong AI security standards is vital. Your existing security policies already govern the deployment and use of applications in your organization, and AI is fundamentally no different from any other software tool you would use. What you need are guardrails specific to how AI functions: for example, which AI service it pulls content from and what it does with the information, possibly confidential, that you feed into it.
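As one illustration, consider an egress guardrail that redacts likely-sensitive values from a prompt before it leaves your network for an external AI service. This is a minimal sketch, not a production data loss prevention control; the patterns and the example values are hypothetical and would need tuning for your environment.

```python
import re

# Hypothetical patterns for values that should never leave the network.
# A real deployment would use a vetted DLP tool with far broader coverage.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with labeled placeholders before egress."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

# Example: scrub a prompt before forwarding it to any external AI service.
print(redact("Email jane.doe@example.com, key sk-abcdefghijklmnop1234"))
# -> Email [REDACTED-EMAIL], key [REDACTED-API_KEY]
```

The design point is that the check sits at the boundary, in front of every AI service your policy allows, so the guardrail holds no matter which tool an employee chooses.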

AI tools will need to be designed with adversarial robustness in mind. We currently see this happening in the lab to improve training, but doing it in the real world, against an unknown enemy, must be top of mind, especially in military and critical infrastructure scenarios.

With attackers looking closely at AI, your organization needs to plan and prepare its defenses right now. Here are a few practices to consider:

  1. Ensure you analyze your software code for bugs, malware, and behavioral anomalies. Signature-based scans only look for what is already known, and these new attacks will leverage unknown techniques and tools.
  2. When monitoring your logs, use AI to fight AI. Machine learning-based security log analysis is a great way to search for patterns and anomalies; it can weigh far more variables than a human analyst and turn them into predictive intelligence that drives proactive defense (see the sketch after this list).
  3. Update your cybersecurity training to reflect new threats such as AI-powered phishing, and update your cybersecurity policies to counter new AI password-cracking tools.
  4. Continue to monitor new uses of AI, including generative AI, to stay ahead of emerging risks.
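To make the second practice concrete, here is a minimal sketch of anomaly detection over parsed security logs using an isolation forest, a common unsupervised technique. The feature names and values are hypothetical; it assumes you have already reduced each log event to numeric features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-client features extracted from security logs:
# [requests_per_minute, distinct_paths, failed_logins, avg_payload_bytes]
baseline = np.array([
    [12, 3, 0, 512],
    [15, 4, 1, 640],
    [10, 2, 0, 480],
    [14, 3, 0, 600],
])

# Fit on a window of known-good traffic. "contamination" is the expected
# fraction of anomalies and must be tuned for your environment.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# Score new events: predict() returns 1 for inliers, -1 for anomalies.
new_events = np.array([
    [13, 3, 0, 550],       # consistent with baseline traffic
    [480, 60, 35, 4096],   # burst of failed logins across many paths
])
for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:
        print(f"Anomalous event flagged for analyst review: {event}")
```

In practice this would run continuously over streaming logs and feed an alerting pipeline; the point is that the model learns what normal looks like rather than matching known signatures.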

These steps are critical to building trust with your employees, partners, and customers that you’re properly safeguarding their data.

Preparing for the Future

To stay competitive, it’s essential for organizations to adopt AI technology while safeguarding against potential risks. By taking these steps now, companies can ensure that they’re able to reap the full benefits of AI while minimizing exposure.

Authors & Contributors
Gail Coury (Author)
CISO
Footnotes

1. https://github.com/features/copilot
2. https://platform.openai.com/overview
3. https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
