How Telcos Can Take Generative AI to the Next Level

F5 Ecosystem | February 20, 2024

Over the past 18 months, generative AI (GenAI) has taken the world by storm.

New services, such as ChatGPT and DALL-E, can generate text, images, and software code in response to natural language prompts from users.

New levels of productivity are now possible and, according to recent research by Bloomberg Intelligence, the GenAI market could be worth as much as US$1.3 trillion by 2032.

With the value of this technology now vividly apparent, we're starting to see a growing drive to create industry- and region-specific versions of the Large Language Models (LLMs) that enable computers to generate credible text and other content.

LLMs are statistical language models trained on a massive amount of data. They can be used to generate and translate text and other content, as well as perform natural language processing tasks. LLMs are typically based on deep-learning architectures.

Across the world, pioneering telecoms operators are already gearing up to play a major role in the delivery and security of these specialist LLMs. In particular, they are anticipating strong demand for end-to-end GenAI solutions from enterprises, start-ups, universities, and public administrations that can’t afford to build the necessary computing infrastructure themselves.

It is an eye-catching trend and, with appropriate security safeguards, LLM-as-a-service solutions could soon be used to develop specific GenAI applications for healthcare, education, transport, and other key sectors (including telecoms).

Challenges

So, what are the next steps to make it all work, and what are some of the key challenges that lie ahead?

As they need to be very responsive, highly reliable and always available, many LLMs will likely be distributed across multiple clouds and network edge locations.

Indeed, provided latency is low enough, GenAI will be integral to telcos’ edge propositions, as users will need real-time “conversational” responses.

For telcos that have been struggling to grow revenue, delivering edge infrastructure to support specialist GenAI systems could be a major new market. Bloomberg Intelligence estimates that the GenAI infrastructure-as-a-service market (used for training LLMs) will be worth US$247 billion by 2032.

Nevertheless, those hoping to hit the GenAI jackpot need to tread carefully.

Distributed architectures, which can increase the potential attack surface, call for robust and scalable security solutions to prevent data and personally identifiable information leaking—both in the AI training and inference phases.
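To make the leakage risk concrete, the following is a minimal, hypothetical sketch of pattern-based PII redaction applied to text before it is used for training or included in an inference request. The patterns and placeholder labels are illustrative assumptions; production systems would use far more robust detection (for example, NER-based scanners).

```python
import re

# Hypothetical patterns for two common PII types; real scanners cover many more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text leaves the tenant."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact Ana at ana@example.com or +34 600 123 456"))
# -> Contact Ana at [EMAIL] or [PHONE]
```

Redacting at the boundary like this protects both phases mentioned above: training corpora stay clean, and prompts sent to a shared inference endpoint carry no raw identifiers.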

As bad actors increasingly employ lateral movement techniques to span multiple interconnected systems, it is critical that telcos secure both the apps and the APIs third parties will use to access the LLM-as-a-service. To help raise awareness on this front, the Open Worldwide Application Security Project (OWASP) recently started a new project to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing LLMs.
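Two of the most basic controls an LLM-as-a-service API gateway would enforce per tenant are key-based authentication and rate limiting. The sketch below is a simplified illustration under assumed names and limits; it does not represent F5's implementation or OWASP guidance verbatim.

```python
import time
from collections import defaultdict

# Illustrative tenant keys and limits; real gateways would use a secrets store.
VALID_KEYS = {"tenant-a-key": "tenant-a", "tenant-b-key": "tenant-b"}
MAX_REQUESTS = 5       # allowed requests per sliding window
WINDOW_SECONDS = 60.0

_request_log = defaultdict(list)  # tenant -> timestamps of recent requests

def authorize(api_key, now=None):
    """Return (allowed, reason_or_tenant) for an incoming inference request."""
    tenant = VALID_KEYS.get(api_key)
    if tenant is None:
        return False, "invalid API key"
    now = time.monotonic() if now is None else now
    # Keep only requests inside the sliding window, then check the limit.
    window = [t for t in _request_log[tenant] if now - t < WINDOW_SECONDS]
    if len(window) >= MAX_REQUESTS:
        return False, "rate limit exceeded"
    window.append(now)
    _request_log[tenant] = window
    return True, tenant
```

Per-tenant limits like this also blunt one lateral-movement path: a stolen key for one tenant cannot be used to flood the shared inference backend.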

One thing is certain: telcos need to maintain the customer trust required to become credible players in this market, as GenAI systems will often need to process personally or commercially sensitive data. For that reason, many governments and regulators are keen that these systems run on compute capacity located within their jurisdictions. Meanwhile, enterprises are reluctant to share sensitive data that could expose their intellectual property and, therefore, prefer private LLM offerings.

Other issues of note include the way AI clusters act as virtual user communities, requiring high-performance data paths to reach data residing in the private repositories of nations and enterprises.

Furthermore, AI's impact on network traffic and infrastructure will be increasingly shaped by plans from both countries and enterprises to self-host AI apps. Concerns about hallucinations, copyright, security, and the environmental impact of AI are driving many to seek greater security and control over data. In addition, they will need new ways to mitigate the anticipated strain on GPUs. All these considerations affect the overall TCO of AI infrastructures.

Enter telcos: flexible and scalable protection across multiple environments

Telcos can play a major role in the AI revolution. They own national infrastructures, have an existing B2B offering, and are natural candidates to become providers of AI-as-a-service.

As a case in point, F5 is already helping a telco in Europe to secure its new GenAI proposition. In this instance, our customer is using NVIDIA DGX SuperPOD and NVIDIA AI Enterprise technologies to develop the first LLM trained natively in a local language. The goal is to capture the nuances of the language, as well as the specifics of its grammar, context, and cultural identity.

To secure the solution across multiple edge sites, the telco will leverage F5 Distributed Cloud Web Application and API Protection (WAAP), provided as a cloud-based service. It is also harnessing F5’s ADC clusters to load balance the new AI platform across its edge infrastructure.
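For readers unfamiliar with how such load balancing spreads inference traffic, here is a minimal sketch of the least-connections strategy, one common approach for this kind of workload. This is a conceptual illustration, not F5's ADC implementation, and the site names are hypothetical.

```python
class LeastConnectionsBalancer:
    """Route each request to the site with the fewest in-flight requests."""

    def __init__(self, sites):
        self.active = {site: 0 for site in sites}

    def acquire(self):
        # Pick the least-loaded site; ties break in declaration order.
        site = min(self.active, key=self.active.get)
        self.active[site] += 1
        return site

    def release(self, site):
        # Mark one request to `site` as complete.
        self.active[site] -= 1

balancer = LeastConnectionsBalancer(["edge-site-1", "edge-site-2"])
first = balancer.acquire()   # least-loaded site receives the request
second = balancer.acquire()
balancer.release(first)
```

Least-connections suits LLM inference better than plain round-robin because individual requests vary widely in duration, so in-flight counts track real load more accurately than request counts do.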

Crucially, F5’s solutions can be deployed across public clouds and multi-tenant data centres, as well as on-premises and at the edge.

What's more, F5 Distributed Cloud WAAP and associated API security solutions can rapidly scale as traffic increases, reducing the overall cost of delivering the LLM-as-a-service. F5 also provides the visibility of traffic flow, latency, and response times that telcos and other managed service providers will need to offer enterprise customers service level agreements.

Another way F5 can help is with the heavy resource demands of LLM inference and other AI tasks. These workloads involve extensive data exchanges that must be secured at scale, and they often become bottlenecks. The result can be lower utilization of valuable resources, which leads to increased operational costs and delays to desired outcomes.
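One widely used technique for raising accelerator utilization under such demands is dynamic batching: grouping queued inference requests into a single GPU pass instead of processing them one by one. The sketch below illustrates the idea only; batch sizes and request names are assumptions, and real serving stacks add timeouts and priority handling.

```python
from collections import deque

def batch_requests(queue, max_batch=8):
    """Drain up to `max_batch` pending requests from the queue as one batch."""
    batch = []
    while queue and len(batch) < max_batch:
        batch.append(queue.popleft())
    return batch

# Hypothetical queue of pending inference requests.
pending = deque(f"req-{i}" for i in range(10))
first_batch = batch_requests(pending)   # up to 8 requests in one GPU pass
second_batch = batch_requests(pending)  # the remaining 2
```

By amortizing one forward pass over many requests, batching keeps the GPU busy where sequential handling would leave it idle between requests, which is exactly the utilization gap the paragraph above describes.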

If they play their cards right, and are able to smartly leverage scalable and robust security solutions, telcos have everything it takes to become trusted providers of industry- and nation-specific LLMs. Those that succeed will undoubtedly gain a major competitive edge in the years ahead.

Keen to learn more? Book a meeting with F5 at Mobile World Congress in Barcelona, 26–29 February (Hall 5, Stand 5C60).
