A service mesh can actually make a Kubernetes environment more complicated to manage when it must be configured separately from the Ingress controller. Maintaining two configurations isn’t just time-consuming. It increases the probability of configuration errors that can prevent proper traffic routing and even lead to security vulnerabilities (like bad actors gaining access to restricted apps) and poor experiences (like customers not being able to access apps they’re authorized for). And on top of the time it takes to perform separate configurations, you end up spending more time troubleshooting the resulting errors.
You can avoid these problems – and save time – by integrating the NGINX Plus-based NGINX Ingress Controller with NGINX Service Mesh to control both ingress and egress mTLS traffic. In this video demo, we cover the complete steps.
Supporting documentation is referenced in the following sections:
- Prerequisites
- Deploying NGINX Ingress Controller with NGINX Service Mesh
- Using a Standard Kubernetes Ingress Resource to Expose the App
- Using an NGINX VirtualServer Resource to Expose the App
- Configuring a Secure Egress Route with NGINX Ingress Controller
Prerequisites (0:18)
Before starting the actual demo, we performed these prerequisites:
- Installed the NGINX Service Mesh control plane in the Kubernetes cluster and set up mTLS and the strict policy for the service mesh.
- Installed the NGINX Plus-based NGINX Ingress Controller as a Deployment (rather than a DaemonSet) in the Kubernetes cluster, enabled egress, and exposed it as a service of type LoadBalancer. Note: The demo does not work with the NGINX Open Source-based NGINX Ingress Controller. For ease of reading, we refer to the NGINX Plus-based NGINX Ingress Controller as simply “NGINX Ingress Controller” in the remainder of this blog.
- Followed our instructions to download the sample bookinfo app, inject the NGINX Service Mesh sidecar, and deploy the app.
As a result of the strict policy created in the first prerequisite, requests to the bookinfo app from clients outside the mesh are denied at the sidecar. We illustrate this in the demo by first running the following command to set up port forwarding:
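A representative version of the command, assuming the service name and port from the standard bookinfo sample:

```shell
# Forward local port 9080 to the productpage service inside the cluster
kubectl port-forward svc/productpage 9080:9080
```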
When we try to access the app, we get status code 503 because our local machine is not part of the service mesh:
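For example (illustrative):

```shell
curl -i http://localhost:9080/

# HTTP/1.1 503 Service Unavailable
# (the sidecar rejects requests from clients outside the mesh)
```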
Deploying NGINX Ingress Controller with NGINX Service Mesh (1:50)
The first stage in the process of exposing an app is to deploy an NGINX Ingress Controller instance. Corresponding instructions are provided in our tutorial, Deploy with NGINX Plus Ingress Controller for Kubernetes.
NGINX provides both Deployment and DaemonSet manifests for this purpose. In the demo, we use the Deployment manifest, nginx-plus-ingress.yaml. It includes annotations to route both ingress and egress traffic through the same NGINX Ingress Controller instance:
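The relevant excerpt looks roughly like this (a sketch; the annotation names follow the NGINX Service Mesh tutorial and may vary by release):

```yaml
# Pod template annotations in nginx-plus-ingress.yaml that route both
# ingress and egress traffic through this Ingress Controller instance
annotations:
  nsm.nginx.com/enable-ingress: "true"
  nsm.nginx.com/enable-egress: "true"
```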
The manifest enables direct integration of NGINX Ingress Controller with Spire, the certificate authority (CA) for NGINX Service Mesh, eliminating the need to inject the NGINX Service Mesh sidecar into NGINX Ingress Controller. Instead, NGINX Ingress Controller fetches certificates and keys directly from the Spire CA to use for mTLS with the pods in the mesh. The manifest specifies the Spire agent address:
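In the manifest this is a command-line argument to the Ingress Controller container (a sketch; the socket path matches the NGINX Service Mesh defaults):

```yaml
args:
  # Address of the Spire agent socket from which NGINX Ingress
  # Controller fetches its certificates and keys
  - -spire-agent-address=/run/spire/sockets/agent.sock
```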
and mounts the Spire agent UNIX socket to the NGINX Ingress Controller pod:
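A sketch of the corresponding volume configuration, assuming the default Spire socket location:

```yaml
volumeMounts:
  - name: spire-agent-socket
    mountPath: /run/spire/sockets
volumes:
  # The Spire agent exposes its UNIX socket on the host
  - name: spire-agent-socket
    hostPath:
      path: /run/spire/sockets
      type: Directory
```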
The final thing to note about the manifest is the -enable-internal-routes CLI argument, which enables us to route to egress services:
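In the container’s args list it appears as follows (sketch):

```yaml
args:
  # Enable routing to egress (internal-route) services
  - -enable-internal-routes
```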
Before beginning the demo, we ran the kubectl apply -f nginx-plus-ingress.yaml command to install NGINX Ingress Controller, and at this point we inspect the deployment in the nginx-ingress namespace. As shown in the READY column of the following output, there is only one container for the NGINX Ingress Controller pod, because we haven’t injected it with an NGINX Service Mesh sidecar.
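A representative check (the pod name hash and age are illustrative):

```shell
kubectl get pods --namespace=nginx-ingress

# NAME                        READY   STATUS    RESTARTS   AGE
# nginx-plus-ingress-<hash>   1/1     Running   0          5m
```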
We’ve also deployed a service of type LoadBalancer to expose the external IP address of the NGINX Ingress Controller (here, 35.233.133.188) outside of the cluster. We’ll access the sample bookinfo application at that IP address.
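The external IP address appears in the service listing (illustrative output; the IP address is the one used in the demo):

```shell
kubectl get svc --namespace=nginx-ingress

# NAME            TYPE           EXTERNAL-IP      PORT(S)
# nginx-ingress   LoadBalancer   35.233.133.188   80:...,443:...
```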
Using a Standard Kubernetes Ingress Resource to Expose the App (3:55)
Now we expose the bookinfo app in the mesh, using a standard Kubernetes Ingress resource as defined in bookinfo-ingress.yaml. Corresponding instructions are provided in our tutorial, Expose an Application with NGINX Plus Ingress Controller.
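A sketch of bookinfo-ingress.yaml, reconstructed from the tutorial (the API version and exact field layout may differ in your release):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookinfo-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - bookinfo.example.com
      # The Kubernetes Secret holding the TLS certificate and key
      secretName: bookinfo-secret
  rules:
    # Send requests for bookinfo.example.com to the productpage service
    - host: bookinfo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: productpage
                port:
                  number: 9080
```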
The resource references a Kubernetes Secret for the bookinfo app in its tls block and includes a routing rule specifying that requests for bookinfo.example.com are sent to the productpage service. The Secret is defined in bookinfo-secret.yaml:
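A sketch of the Secret (the base64-encoded values are omitted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bookinfo-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```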
We run this command to load the key and certificate (self-signed in the demo):
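Assuming the filename above:

```shell
kubectl apply -f bookinfo-secret.yaml
```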
We activate the Ingress resource:
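A representative command:

```shell
kubectl apply -f bookinfo-ingress.yaml
```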
and verify that NGINX Ingress Controller added the route defined in the resource, as confirmed by the event at the end of the output:
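A representative check (the event text is illustrative of what NGINX Ingress Controller reports):

```shell
kubectl describe ingress bookinfo-ingress

# ...
# Events:
#   Type    Reason          Message
#   Normal  AddedOrUpdated  Configuration for default/bookinfo-ingress
#                           was added or updated
```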
In the demo we now use a browser to access the bookinfo app at https://bookinfo.example.com/. (We have previously added a mapping in the local /etc/hosts file between the IP address of the Ingress Controller service – 35.233.133.188 in the demo, as noted above – and bookinfo.example.com. For instructions, see the documentation.) The info in the Book Reviews section of the page changes periodically as requests rotate through the three versions of the reviews service defined in bookinfo.yaml (download).
We next inspect the ingress traffic into the cluster. We run the generate-traffic.sh script to make requests to the productpage service via the NGINX Ingress Controller’s public IP address, and then run the nginx-meshctl top command to monitor the traffic:
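Representative commands (the script name is from the demo; it simply loops requests against the external IP address):

```shell
./generate-traffic.sh

# Show traffic rates and success rates for meshed workloads
nginx-meshctl top
```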
Using an NGINX VirtualServer Resource to Expose the App (6:45)
We next show an alternative way to expose an app, using an NGINX VirtualServer resource. It’s a custom NGINX Ingress Controller resource that supports more complex traffic handling, such as traffic splitting and content‑based routing.
First we delete the standard Ingress resource:
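A representative command:

```shell
kubectl delete -f bookinfo-ingress.yaml
```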
Our bookinfo-vs.yaml file configures mTLS with the same Secret as in bookinfo-ingress.yaml, defines the productpage service as the upstream, and defines a route that sends all GET requests made at bookinfo.example.com to that upstream. For HTTP methods other than GET, it returns status code 405.
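A sketch of bookinfo-vs.yaml, reconstructed from the description above (field layout may differ from the original file):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: bookinfo-vs
spec:
  host: bookinfo.example.com
  tls:
    # The same TLS Secret used by bookinfo-ingress.yaml
    secret: bookinfo-secret
  upstreams:
    - name: backend
      service: productpage
      port: 9080
  routes:
    - path: /
      matches:
        # GET requests pass to the productpage upstream ...
        - conditions:
            - variable: $request_method
              value: GET
          action:
            pass: backend
      # ... and all other HTTP methods get status code 405
      action:
        return:
          code: 405
          type: text/plain
          body: "Method not allowed"
```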
We apply the resource:
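A representative command:

```shell
kubectl apply -f bookinfo-vs.yaml
```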
We then perform the same steps as with the Ingress resource – running the kubectl describe command to confirm correct deployment and accessing the app in a browser. Another confirmation that the app is working correctly is that it rejects the POST method:
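For example (illustrative):

```shell
curl -k -X POST https://bookinfo.example.com/

# 405 Not Allowed
```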
Configuring a Secure Egress Route with NGINX Ingress Controller (8:44)
Now we show how to route egress traffic through NGINX Ingress Controller. Our tutorial Configure a Secure Egress Route with NGINX Plus Ingress Controller covers the process, using different sample apps.
We’ve already defined a simple bash pod in bash.yaml and deployed it in the default namespace; this is the pod from which we send requests. As shown in the READY column of the following output, it has been injected with the NGINX Service Mesh sidecar.
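A representative check (the pod name hash is illustrative; 2/2 means the app container plus the injected sidecar):

```shell
kubectl get pods

# NAME          READY   STATUS    RESTARTS   AGE
# bash-<hash>   2/2     Running   0          1m
```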
There are several use cases where you might want to enable requests from within the pod to an egress service, which is any entity that’s not part of NGINX Service Mesh. Examples are services deployed:
- Outside the cluster
- On another cluster
- On the same cluster, but not injected with the NGINX Service Mesh sidecar
In the demo, we’re considering the final use case. We have an application deployed in the legacy namespace, which isn’t controlled by NGINX Service Mesh and where automatic injection of the NGINX Service Mesh sidecar is disabled. There’s only one pod running for the app.
Remember that we’ve configured a strict mTLS policy for NGINX Service Mesh; as a result we can’t send requests directly from the bash pod to the target service, because the two cannot authenticate with each other. When we try, we get status code 503 as illustrated here:
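A representative attempt; the pod and service names are illustrative:

```shell
kubectl exec -it <bash-pod> -- curl -i http://target-svc.legacy

# HTTP/1.1 503 Service Unavailable
```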
The solution is to enable the bash pod to send egress traffic through NGINX Ingress Controller. We uncomment the egress annotation in bash.yaml:
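A sketch of the annotation (the name follows the NGINX Service Mesh docs):

```yaml
# Allow this pod to send egress traffic through NGINX Ingress Controller
annotations:
  config.nsm.nginx.com/default-egress-allowed: "true"
```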
Then we apply the new configuration:
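A representative command:

```shell
kubectl apply -f bash.yaml
```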
and verify that a new bash pod has spun up:
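For example (illustrative output):

```shell
kubectl get pods

# NAME          READY   STATUS    RESTARTS   AGE
# bash-<hash>   2/2     Running   0          10s
```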
Now when we run the same kubectl exec command as before, to send a request from the bash pod to the target service, we get status code 404 instead of 503. This indicates that the bash pod has successfully sent the request to NGINX Ingress Controller, but the latter doesn’t know where to forward it because no route is defined.
We create the required route with the following Ingress resource definition in legacy-route.yaml. The internal-route annotation means that the target service is not exposed to the Internet, but only to workloads within NGINX Service Mesh.
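A sketch of legacy-route.yaml (the service name, namespace, and port are illustrative; the annotation name follows the NGINX Ingress Controller docs):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-route
  namespace: legacy
  annotations:
    # Internal route: reachable only from workloads in the mesh,
    # never exposed to the Internet
    nsm.nginx.com/internal-route: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: target-svc.legacy
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: target-svc
                port:
                  number: 80
```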
We activate the new resource and confirm that NGINX Ingress Controller added the route defined in the resource:
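Representative commands (the event text is illustrative):

```shell
kubectl apply -f legacy-route.yaml
kubectl describe ingress legacy-route --namespace=legacy

# Events:
#   Normal  AddedOrUpdated  Configuration for legacy/legacy-route
#                           was added or updated
```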
Now when we run the kubectl exec command, we reach the target service:
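For example (illustrative; the same command that previously returned 503 and then 404):

```shell
kubectl exec -it <bash-pod> -- curl -i http://target-svc.legacy

# HTTP/1.1 200 OK
```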
An advantage of routing egress traffic through NGINX Ingress Controller is that you can control exactly which external services can be reached from inside the cluster – it’s only the ones for which you define a route.
One final thing we show in the demo is how to monitor egress traffic. We run the kubectl exec command to send several requests, and then run this command:
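A representative monitoring command:

```shell
# Show traffic rates for meshed workloads, including egress requests
# forwarded through NGINX Ingress Controller
nginx-meshctl top
```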
Say “No” to Latency – Try NGINX Service Mesh with NGINX Ingress Controller
Many service meshes offer ingress and egress gateway options, but we think you’ll appreciate an added benefit of the NGINX integration: lower latency. Most meshes require a sidecar to be injected into the Ingress controller, which requires traffic to make an extra hop on its way to your apps. Seconds matter, and that extra hop slowing down your digital experiences might cause customers to turn elsewhere. NGINX Service Mesh doesn’t add unnecessary latency because it doesn’t inject a sidecar into NGINX Ingress Controller. Instead, by integrating directly with Spire, the CA of the mesh, NGINX Ingress Controller becomes part of NGINX Service Mesh. NGINX Ingress Controller simply fetches certificates and keys from the Spire agent and uses them to participate in the mTLS cert exchange with meshed pods.
There are two versions of NGINX Ingress Controller for Kubernetes: NGINX Open Source and NGINX Plus. To deploy NGINX Ingress Controller with NGINX Service Mesh as described in this blog, you must use the NGINX Plus version, which is available for a free 30-day trial.
NGINX Service Mesh is completely free, available for immediate download, and deployable in less than 10 minutes! To get started, check out the docs and let us know how it goes via GitHub.