Get Me to the Cluster…with BGP?

NGINX | February 28, 2023

Creating and managing a robust Kubernetes environment demands smooth collaboration between your Network and Application teams. But their priorities and working styles are usually quite different, leading to conflicts with potentially serious consequences – slow app development, delayed deployment, and even network downtime.

Only when both teams succeed, working towards a common goal, can today’s modern applications be delivered on time with proper security and scalability. So how do you leverage the skills and expertise of each team while helping them work in tandem?

In our whitepaper Get Me to the Cluster, we detail a solution for providing external access to Kubernetes services that lets Network and Application teams combine their strengths without conflict.

How to Expose Apps in Kubernetes Clusters

The solution works specifically for Kubernetes clusters hosted on premises, with nodes running on bare metal or traditional Linux virtual machines (VMs) and standard Layer 2 switches and Layer 3 routers providing the networking for communication in the data center. It doesn’t extend to cloud‑hosted Kubernetes clusters, because cloud providers don’t allow us to control either the core networking in their data centers or the networking in their managed Kubernetes environments.

Diagram of Kubernetes clusters hosted on premises, with nodes and standard Layer 2 switches and Layer 3 routers providing the networking for communication in the data center.

Before we go over the specifics of our solution, let’s review why other standard ways to expose applications in a Kubernetes cluster don’t work for on‑premises deployments:

  • Service – Groups together pods running the same apps. This is great for internal pod-to-pod communication, but is only visible inside the cluster, so it doesn’t help expose apps externally.
  • NodePort – Opens a specific port on every node in the cluster and forwards traffic to the corresponding app. While this allows external users to access the service, it’s not ideal: the configuration is static, you have to use high‑numbered TCP ports (instead of well‑known lower port numbers) and coordinate port numbers with other apps, and you can’t share common TCP ports among different apps (see the sketch after this list).
  • LoadBalancer – Uses the NodePort definitions on each node to create a network path from the outside world to your Kubernetes nodes. It’s great for cloud‑hosted Kubernetes, because AWS, Google Cloud Platform, Microsoft Azure and most other cloud providers support it as an easily configured feature that works well and provides the required public IP address and matching DNS A record for a service. Unfortunately, there’s no equivalent for on‑premises clusters.
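To make the NodePort limitation concrete, here is a minimal sketch of a Service exposed that way. The app name, labels, and port numbers are illustrative assumptions, not values taken from the whitepaper.

```yaml
# Hypothetical NodePort Service – names and ports are illustrative only
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: NodePort
  selector:
    app: demo-app
  ports:
  - port: 80            # ClusterIP port, reachable only inside the cluster
    targetPort: 8080    # container port on the pods
    nodePort: 31080     # static, high-numbered port (default range 30000-32767) opened on every node
```

On a cloud‑hosted cluster, changing `type: NodePort` to `type: LoadBalancer` is enough for the provider to allocate a public IP address and provision its load balancer. On premises that type has no built‑in implementation, which is exactly the gap the solution described below fills.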

Enabling External User Access to On‑Premises Kubernetes Clusters

That leaves us with the Kubernetes Ingress object, which is specifically designed for traffic that flows from users outside the cluster to pods inside the cluster (north‑south traffic). The Ingress creates an external HTTP/HTTPS entry point for the cluster – a single IP address or DNS name at which external users can access multiple services. This is just what’s needed! The Ingress object is implemented by an Ingress controller – in our solution the enterprise‑grade F5 NGINX Ingress Controller based on NGINX Plus.
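As a rough sketch of what this looks like (the hostname, paths, and service names below are hypothetical, not drawn from the whitepaper), a single Ingress resource routes external HTTP traffic for multiple services through one entry point:

```yaml
# Hypothetical Ingress – hostname, paths, and service names are illustrative only
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cluster-entry
spec:
  ingressClassName: nginx        # handled by NGINX Ingress Controller
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /shop
        pathType: Prefix
        backend:
          service:
            name: shop-svc
            port:
              number: 80
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog-svc
            port:
              number: 80
```

External users hit one IP address or DNS name, and the Ingress controller performs the Layer 7 routing to the right service inside the cluster.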

It might surprise you that another key component of the solution is Border Gateway Protocol (BGP), a Layer 3 routing protocol. But a great solution doesn’t have to be complex!

The solution outlined in Get Me to the Cluster actually has four components:

  1. iBGP network – Internal BGP (iBGP) is used to exchange routing information within an autonomous system (AS) in the data center and helps ensure the network is reliable and scalable. iBGP is already in place and supported by the Network team in most data centers.
  2. Project Calico CNI networking – Project Calico is an open source networking solution that flexibly connects environments in on‑premises data centers while giving fine‑grained control over traffic flow. We use the CNI plug‑in from Project Calico for networking in the Kubernetes cluster, with BGP enabled. This gives you control over the IP address pools allocated to pods, which makes it easier to pinpoint networking issues (a sketch of the relevant Calico resources follows this list).
  3. NGINX Ingress Controller based on NGINX Plus – NGINX Ingress Controller watches the endpoint IP addresses of the pods behind each service and automatically reconfigures its list of upstreams with no interruption to traffic processing. Application teams can also take advantage of the many other enterprise‑grade Layer 7 HTTP features in NGINX Plus, including active health checks, mTLS, and JWT‑based authentication (a sample resource follows this list).
  4. NGINX Plus as a reverse proxy at the edge – NGINX Plus sits as a reverse proxy at the edge of the Kubernetes cluster, providing a path between the switches and routers in the data center and the internal network in the Kubernetes cluster. It functions as a replacement for the Kubernetes LoadBalancer object and uses Quagga for BGP.
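To illustrate item 2, here is a minimal sketch of the kind of Calico resources involved, applied with calicoctl. The AS number, pod CIDR, and peer address are assumptions for illustration, not values from the whitepaper.

```yaml
# Hypothetical Calico BGP setup – AS number, CIDR, and peer IP are illustrative only
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true    # full iBGP mesh among the cluster nodes
  asNumber: 64512                # private AS shared with the data center iBGP network
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pod-pool
spec:
  cidr: 192.168.100.0/22         # pod addresses drawn from a known, routable pool
  ipipMode: Never                # advertise pod IPs natively over BGP, no encapsulation
  natOutgoing: false
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: edge-proxy
spec:
  peerIP: 10.1.1.254             # for example, the NGINX Plus edge host running Quagga
  asNumber: 64512
```

Because pod addresses come from a pool you define and are advertised over BGP, the Network team can see and route them like any other addresses in the data center.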
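And as a sketch of item 3, NGINX Ingress Controller’s VirtualServer resource can enable NGINX Plus active health checks for an upstream. The host, service, and health‑check path here are hypothetical.

```yaml
# Hypothetical VirtualServer with an NGINX Plus active health check – names are illustrative only
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: demo-app
spec:
  host: apps.example.com
  upstreams:
  - name: demo
    service: demo-svc
    port: 80
    healthCheck:
      enable: true         # NGINX Plus probes the pods directly
      path: /healthz
      interval: 10s
  routes:
  - path: /
    action:
      pass: demo
```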

The diagram illustrates the solution architecture, indicating which protocols the solution components use to communicate, not the order in which data is exchanged during request processing.

Diagram illustrating the solution architecture, indicating which protocols the solution components use to communicate

Download the Whitepaper for Free

By working together to implement a solution with well‑defined components, Network and Application teams can easily deliver optimal performance and reliability.

Our solution uses modern networking tools, protocols, and existing architectures. Because it is designed to be inexpensive and easy to implement, manage, and support, it reduces friction and builds bridges between your teams.

To see the code in action and learn step-by-step how to deploy our solution, download Get Me to the Cluster for free.

