I’m thrilled to introduce my new podcast series, The Global CISO: For defenders by defenders. I’ve been a CISO for over 20 years and now am Field CISO at F5. Experience is the best teacher, but the tuition cost can be high. That's why I’m launching this podcast series to discuss how CISOs and CIOs can better navigate our world’s incredible pace of technical change and be ready for the information security challenges of the future.
F5 supports over 23,000 enterprise customers in more than 170 countries, including the largest banks, auto manufacturers, and telecommunications providers in the world. I traveled around 300,000 miles last year, working with some of the most complex organizations in the world, and it's from this vantage point that this podcast series examines today’s most important information security, compliance, and risk management trends. I’m going to talk about what works and what doesn't, and how CISOs and CIOs can harness the power of the AI revolution to thrive in a fast-changing world.
Here are some of the takeaways from my first episode:
When it comes to global leadership in AI, the U.S. and China are competing for dominance across the world and influence across the global south. However, both nations are building ideological constraints into their AI stacks. The U.S. AI Action Plan has a geopolitical theme driving its model that pushes the U.S. National Institute of Standards and Technology (NIST) to remove subjects including climate change, DEI, or misinformation from the NIST risk management framework. The official Chinese AI stack, which is marketed as open source (really open weight) and readily available to the global south, is built to maintain fidelity to socialist principles.
I believe that the ideological elements within these AI models and the stacks that they're advancing may hamper the adoption of either of them. For example, will actuaries in the insurance industry want to rely on the U.S. AI models for property casualty evaluation and forecasts if they don’t incorporate data about climate change? How does adherence to socialist principles provide value if you are using the Chinese AI models to manage financial market analyses? I think a lot of global organizations will want to steer clear of AI models founded on ideological purity tilts.
Look to the EU as a potential dark horse to benefit from the AI ideological rivalry between the U.S. and China. The EU is not necessarily the world’s greatest AI innovator, but European tech companies do some great work, and France has some of the best mathematicians on the planet. The EU AI approach attempts to stay away from ideological bias, instead focusing on risk management, risk acceptance, and heavy penalties for companies that deploy AI in ways deemed irresponsible. So don't count the EU out just yet.
Key takeaway: Tech leaders must be very careful and thoughtful about what AI models and technology stacks they adopt, because these early decisions will have long-ranging consequences and may lead to high exit costs. Regardless, expect a continued push for adoption of open models across the board.
Sam Altman, CEO of OpenAI, was recently on stage talking about a dramatic rise in fraud perpetuated by AI systems. That is consistent with our experience at F5 as well. We've stated that most automated Turing tests and CAPTCHA quizzes have been irrelevant for a long time, and they’re especially ineffective against advanced bots. But I think defenders have been taken off guard by seeing how casually a ChatGPT agent can bypass CAPTCHA restrictions.
We are also seeing a lot of voice fraud, where AI agents now have very convincing local accents and sound just like they live next door. We've seen roughly a 2,400% increase in fraud perpetuated by AI using voicemail and SMS phishing, social engineering, deep fakes, and increasingly convincing spam emails that target human gullibility.
Brace yourselves, because we're going to see a lot more AI-enabled fraud. That's where the money is, and that's where the criminals are headed.
Key takeaway: AI is supercharging fraud, and traditional defenses like CAPTCHAs are obsolete. CISOs must prioritize employee training and adaptive, AI-driven security strategies to counter increasingly convincing AI-enhanced scams across voice, text, and social media channels.
CISOs already have a lot of responsibilities. We’re accountable for protecting and securing data, email, applications, perimeters, the edge, networks, and devices, and now we're going to need to take on inference security as well.
That’s because inference is the new attack surface. Inference is now tied directly to decisions that affect business, finance, compliance, and reputation, and adversaries can exploit inference engines through various attacks, including prompt injection and model manipulation. CISOs must now secure not just the infrastructure around AI, but also the AI decision-making itself, because inference is both a new attack vector and a critical business function.
One new method of inference manipulation is an “echo chamber” attack. This is a form of prompt injection that involves crafting a series of prompts that gradually suggest an unauthorized response without directly requesting an unauthorized generation. It’s kind of like gaslighting the large language model (LLM) through small suggestions over a multi-turn series of prompts that lead the models to jailbreak themselves or bypass their controls.
These echo chamber attacks are around 80% to 90% effective. So, is it now the CISO’s job to secure inference? If not, who else in the organization is going to take on that responsibility?
Key takeaway: Inference is emerging as a critical new attack surface, and CISOs must extend their mandate to secure AI decision-making itself. In my view, if you have “Chief” and “Security” in your title, it’s going to become your problem, whether it’s recognized today or not. May as well get on it.
I think they actually may have the edge in AI, though it all depends on what you measure. I spend a lot of time in Asia and have talked to a lot of folks who live and work there. It seems to me that China has an early advantage in actually putting AI to use in meaningful ways. Perhaps China doesn’t have the most massive models ever developed, but the models it has are actually in use, solving supply chain problems, rerouting shipments, and other useful real-world functions.
I think it's fine to have the greatest 200-quadrillion-parameter model, if that's what you want to build. But right now, in the U.S. and in Europe, I see a pretty substantial gap—to use a drag racing analogy—between building all that power and putting that power to the ground.
The U.S. AI Action Plan is heavily prioritizing energy development, building AI factories and AI data centers. Investors in this space are throwing money at it hand over fist, but we don’t seem to be delivering real-world value. Reference recent headline-making studies about how few AI programs are delivering business value at this stage in AI adoption, which is a natural segue into the Trough of Disillusionment phase of new tech adoption.
Part of the problem here is that simply because the U.S. has the biggest, most powerful AI models doesn’t mean we’re using them to drive meaningful business outcomes, and there are very few moats in AI. Having the biggest model creates a time-boxed competitive edge that won’t last forever.
Key takeaway: China may hold an edge in AI broadly speaking, not by building the biggest models, but by rapidly applying them to real-world problems—while the U.S. and Europe risk falling behind by focusing more on scale than on practical impact. China also has much more power capacity available than the U.S., and that is not a quick problem for America to solve. There’s no choice but to drill, baby, drill, which we all know isn’t sustainable, but this is the “logic” of global race conditions.
We want The Global CISO to be your go-to source for staying ahead of the threats and trends shaping our industry and our future. So, if there are topics you think I should cover, please send me a note or connect with me on LinkedIn.
Subscribe now to catch all the episodes of The Global CISO: For defenders by defenders on your podcast platform of choice—and share them with your team, your peers, and your network.
Tune in to my first episode, “Taming APIs, AI security, and PQC."
Also, be sure to listen to my second episode, “Automate first: The CTO playbook for APIs, observability, and vendor accountability.”