Virtualizing the Gi LAN – Do’s and Don’ts

F5 Ecosystem | February 15, 2017

After many years of discussions and proofs of concept, network functions virtualization (NFV) is now moving from the conceptual stage to realization.

Some service providers have started rolling out NFV infrastructure platforms, and a number of them have already enabled their first NFV-based use cases and applications on top of these platforms.

Virtualizing the evolved packet core (EPC) and the Gi LAN is one of those use cases that has a lot of industry traction at the moment.

When deploying a virtual Gi LAN, there are two fundamental questions that need to be answered. The first question is about consolidation: do we consolidate several Gi LAN functions in a single virtual network function (VNF), or do we completely decompose the Gi LAN architecture into many discrete VNFs?

The second question is about scale. How do we effectively scale out this architecture, as a single virtual machine may not provide sufficient capacity to handle the Gi LAN workload?

In order to secure and monetize their networks, mobile operators have been deploying many different technologies on the Gi LAN, including but not limited to TCP optimization, video optimization, header enrichment, deep packet inspection, Gi-firewalling and carrier-grade NAT (CGNAT).

Historically many of these functions were deployed on different platforms, often from different vendors. In the last few years, mobile operators have been consolidating and simplifying their Gi LAN architecture.

Through consolidation, mobile operators have been able to significantly reduce their total cost of ownership. When migrating this architecture to an NFV platform – which relies on common off-the-shelf hardware – there may be a temptation to start decomposing the architecture into different discrete VNFs, each dedicated to a single function.

This decomposed architecture certainly makes sense if the different Gi LAN functions are each applicable to a very specific subset of flows. An intelligent service classification engine sitting at the entry of the Gi LAN can, based on business policies obtained through interaction with the Policy and Charging Rules Function (PCRF), determine which flows need to be steered and/or service-chained to particular functions deployed as separate VNFs. Those VNFs then only have to process the traffic they need to act upon, which results in a very efficient distribution of traffic across all the VNFs.
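To make this more tangible, the following Python sketch shows how such a policy-driven classifier might map a flow to an ordered service chain. The plan names, VNF labels, and flow attributes are hypothetical and deliberately simplified; in a real deployment the policies would be driven by the PCRF (for example over the Gx or Sd interfaces) rather than hard-coded.

```python
# Hypothetical sketch of a policy-driven Gi LAN service classifier.
# Plan names, VNF labels, and flow fields are illustrative only; in
# practice the policies would come from the PCRF rather than a dict.
from dataclasses import dataclass

@dataclass
class Flow:
    subscriber_id: str
    dst_port: int
    app_protocol: str  # e.g. "http", "https", "dns", as seen by DPI

# Business policies (normally pushed by the PCRF), keyed by subscriber plan.
POLICIES = {
    "video_optimized_plan": ["dpi", "video_optimizer", "gi_firewall", "cgnat"],
    "default_plan":         ["gi_firewall", "cgnat"],
}

def classify(flow: Flow, plan: str) -> list:
    """Return the ordered service chain (list of VNF names) for a flow."""
    chain = list(POLICIES.get(plan, POLICIES["default_plan"]))
    # Per-flow refinement: only cleartext HTTP traffic is steered through
    # the video optimizer in this simplified example.
    if "video_optimizer" in chain and flow.app_protocol != "http":
        chain.remove("video_optimizer")
    return chain

# Example: an HTTPS flow from a subscriber on the video-optimized plan
# skips the optimizer but still traverses DPI, the firewall and CGNAT.
flow = Flow(subscriber_id="imsi-001", dst_port=443, app_protocol="https")
print(classify(flow, "video_optimized_plan"))  # ['dpi', 'gi_firewall', 'cgnat']
```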

However, if Gi LAN functions need to be applied to almost all traffic then this decomposition of functions doesn't bring any value. Take TCP optimization and Gi firewalls as examples. Almost all Gi LAN traffic has to be processed by these two functions, so having them consolidated in a single VNF results in efficiency gains similar to a physical deployment.

Indeed, there is no need for an intelligent service classifier here to make a VNF selection, and there is no hair-pinning in and out of the SDN layer to get traffic from one VNF to the next. Furthermore, adding a firewall function on top of a TCP optimization function adds very little CPU overhead, so the value of consolidation still applies in an NFV environment.

Another challenge is how to deal with workloads that are larger than what a single VNF can handle. Some vendors have taken the approach of allowing a VNF to consist of many virtual machines (VMs). Most functions on the Gi LAN are stateful operations, which means the ingress and egress traffic for the same flow need to be processed on the same VM. As a result, if egress traffic arrives on a different VM than the one that processed the ingress traffic, inter-VM communication is required to get the traffic back to the right VM. This obviously results in performance degradation.
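To make the flow-affinity requirement concrete, here is a minimal Python sketch (purely illustrative, not F5 code) that hashes a direction-independent flow key so that the uplink and downlink packets of the same flow always land on the same VM. The field names and the number of VMs are assumptions for the example.

```python
# Illustrative only: keep both directions of a flow on the same VM by
# hashing a flow key that is identical for ingress and egress traffic.
import hashlib

NUM_VMS = 4  # assumed number of VNF instances

def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    """Sort the two endpoints so the key is direction-independent."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return f"{a}-{b}-{proto}"

def pick_vm(key: str) -> int:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_VMS

# Uplink and downlink packets of the same TCP flow map to the same VM.
uplink = flow_key("10.0.0.5", 41000, "203.0.113.9", 443, "tcp")
downlink = flow_key("203.0.113.9", 443, "10.0.0.5", 41000, "tcp")
assert pick_vm(uplink) == pick_vm(downlink)
```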

At F5, we have chosen an approach in which a VNF is always deployed as a single VM, and scaling the architecture out to multiple VMs becomes an external design factor. Different scale-out techniques are available, ranging from very simple to highly advanced.

A very simple scale-out architecture is based on IP-based traffic hashing across the different VMs, such as equal cost multipath (ECMP) routing. However, this technique has several drawbacks: established flows are reshuffled onto different VMs whenever the number of paths changes, and the distribution cannot take subscriber context or VM load into account, which makes it impractical for the majority of use cases on the Gi LAN. SDN-based approaches to controlling the distribution of traffic can avoid some of those ECMP limitations, but the capabilities are still somewhat limited and highly dependent on the chosen SDN vendor.
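The reshuffling problem is easy to see with a simple hash-mod-N scheme: growing the pool from four to five VMs remaps the large majority of existing flows, which breaks stateful sessions. The sketch below is generic and not tied to any particular router or SDN implementation.

```python
# Illustrative only: fraction of flows a plain hash-mod-N (ECMP-style)
# scheme remaps when the VM pool grows from 4 to 5 instances.
import hashlib

def pick_vm(flow_id: str, num_vms: int) -> int:
    digest = hashlib.sha256(flow_id.encode()).hexdigest()
    return int(digest, 16) % num_vms

flows = [f"subscriber-flow-{i}" for i in range(10_000)]
moved = sum(1 for f in flows if pick_vm(f, 4) != pick_vm(f, 5))
print(f"{moved / len(flows):.0%} of flows changed VM")  # roughly 80%
```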

By far the most flexible and advanced approach to scaling out any Gi LAN function, and one that is completely independent of the underlying network and SDN, is a stateless, software-based load balancer (itself also deployed as VMs). Because the load balancer holds no per-flow state, it can scale out almost indefinitely: more load-balancing VMs can be added without losing consistency in how traffic is distributed across the VMs providing the Gi LAN functions.
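One generic way to achieve this kind of stateless yet consistent distribution is consistent hashing. The sketch below is a simplified illustration (not a description of F5's actual implementation) showing that two independent load-balancer instances agree on the flow-to-VM mapping without sharing any per-flow state, and that adding a VM only remaps a small share of flows.

```python
# Generic consistent-hashing sketch (simplified, not F5's implementation).
import bisect
import hashlib

def h(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, vms, replicas=100):
        # Place several virtual points per VM on the hash ring.
        self.ring = sorted((h(f"{vm}#{i}"), vm) for vm in vms for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def pick_vm(self, flow_key: str) -> str:
        idx = bisect.bisect(self.keys, h(flow_key)) % len(self.keys)
        return self.ring[idx][1]

# Two stateless load-balancer instances with the same VM list agree on
# every flow-to-VM decision without exchanging any per-flow state.
lb1 = ConsistentHashRing(["vm-1", "vm-2", "vm-3", "vm-4"])
lb2 = ConsistentHashRing(["vm-1", "vm-2", "vm-3", "vm-4"])
assert all(lb1.pick_vm(f"flow-{i}") == lb2.pick_vm(f"flow-{i}") for i in range(1000))

# Growing the pool to five VMs moves only a minority of existing flows.
lb3 = ConsistentHashRing(["vm-1", "vm-2", "vm-3", "vm-4", "vm-5"])
moved = sum(lb1.pick_vm(f"flow-{i}") != lb3.pick_vm(f"flow-{i}") for i in range(1000))
print(f"{moved / 1000:.0%} of flows remapped")  # roughly one in five
```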

Based on the above, we believe it is very important for any mobile operator to carefully think about both the consolidation and the scale-out aspects when migrating their Gi LAN architecture from the physical to the virtual layer.


About the Author

Bart Salaets
Field Chief Technology Officer


