We are happy to announce release 1.8.0 of NGINX Ingress Controller for Kubernetes. This release builds on our supported solution for Ingress load balancing on Kubernetes platforms, including Red Hat OpenShift, Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), IBM Cloud Private, Diamanti, and others.
With release 1.8.0, we continue our commitment to providing a flexible, powerful, and easy-to-use Ingress Controller that you can configure with both Kubernetes Ingress resources and NGINX Ingress resources:
Release 1.8.0 brings the following major enhancements and improvements:
NGINX Ingress Controller for Kubernetes is a daemon that runs alongside NGINX Open Source or NGINX Plus instances in a Kubernetes environment. The daemon monitors Kubernetes Ingress resources and NGINX Ingress resources to discover requests for services that require Ingress load balancing. The daemon then automatically configures NGINX or NGINX Plus to route and load balance traffic to these services.
Multiple NGINX Ingress controller implementations are available. The official NGINX implementation is high‑performance, production‑ready, and suitable for long‑term deployment. We focus on providing stability across releases, with features that can be deployed at enterprise scale. We provide full technical support to NGINX Plus subscribers at no additional cost, and NGINX Open Source users benefit from our focus on stability and supportability.
As more businesses accelerate their digital transformation, attacks against applications are increasing, and new, more complex exploits appear daily. As an intelligent web application firewall (WAF), NGINX App Protect can effectively mitigate malicious attacks at high speed, in ways that network firewalls and anti‑virus solutions cannot.
Release 1.8.0 embeds NGINX App Protect within NGINX Ingress Controller, so you can use the familiar Kubernetes API to manage App Protect security policies and configuration.
The consolidated solution brings three unique benefits:
For a more detailed overview, see the companion blog, Securing Your Apps in Kubernetes with NGINX App Protect. For complete instructions for configuring and troubleshooting NGINX App Protect in NGINX Ingress Controller, see the Ingress Controller documentation. For information about other App Protect use cases, see the NGINX App Protect documentation.
You must have subscriptions for NGINX Plus and NGINX App Protect to build App Protect into the NGINX Ingress Controller image.
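As a rough sketch of what this looks like in practice, App Protect can be enabled on a standard Kubernetes Ingress resource with annotations. The annotation names below come from the App Protect documentation, and the referenced APPolicy resource name (`default/dataguard-alarm`) is a hypothetical example, not something defined in this post:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Enable App Protect for this Ingress (annotation names per the App Protect docs)
    appprotect.f5.com/app-protect-enable: "True"
    # Reference to an APPolicy custom resource; the name here is hypothetical
    appprotect.f5.com/app-protect-policy: "default/dataguard-alarm"
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
```

Verify the exact annotation set against the Ingress Controller documentation for your release before relying on it.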
In previous releases, you could insert NGINX configuration into standard Kubernetes Ingress resources using snippets and custom templates. Snippets allow you to assign NGINX‑native configuration to most NGINX configuration contexts. Custom templates allow you to generate other blocks of the configuration, such as the default server, and apply them with ConfigMaps. Release 1.8.0 extends the use of snippets and custom templates to the NGINX Ingress resources, VirtualServer and VirtualServerRoute.
Both snippets and custom templates enable admins to implement configurations for use cases and functionality that are not exposed through either standard Kubernetes Ingress resources or NGINX Ingress resources. One particularly important use case is modernizing non‑Kubernetes applications that are already proxied by NGINX by migrating them to Kubernetes – with snippets and custom templates you can continue using NGINX features that are not yet implemented in the NGINX Ingress resources, such as caching and rate limiting.
This sample configuration shows how to configure caching and rate limiting with snippets that apply different settings in different NGINX configuration contexts:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
  namespace: cafe
spec:
  http-snippets: |
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
    proxy_cache_path /tmp keys_zone=one:10m;
  host: cafe.example.com
  tls:
    secret: cafe-secret
  server-snippets: |
    limit_req zone=mylimit burst=20;
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    location-snippets: |
      proxy_cache one;
      proxy_cache_valid 200 10m;
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
It’s important to quality‑check snippets and custom templates carefully: an invalid snippet or template prevents NGINX from reloading its configuration. While traffic processing is not interrupted in this case (NGINX continues running with the existing valid configuration), additional mechanisms are available to increase stability:
Release 1.8.0 introduces URI rewriting and modification of request and response headers, on both upstream and downstream connections. This essentially decouples end users from the backend by adding a filtering and modification layer. This functionality is now available directly in the VirtualServer and VirtualServerRoute resources and is configurable for specific paths.
Being able to rewrite URIs and modify request and response headers lets you quickly address unforeseen reliability issues in production environments without requiring backend developers to troubleshoot or rewrite code. That in turn increases the availability, security, and resilience of your application workloads in Kubernetes.
Typical use cases include:
You configure header modifications and URI rewrites in a VirtualServer or VirtualServerRoute under the proxy action, where requests are passed to an upstream.
In this sample configuration, when the Ingress Controller proxies client requests for the /tea path, it modifies the Content-Type request header before forwarding the request to the upstreams. It also rewrites the /tea URI to / as specified in the rewritePath field. Before passing the server's response back to the client, the Ingress Controller adds a new Access-Control-Allow-Origin header and hides, ignores, and passes several others.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      proxy:
        upstream: tea
        requestHeaders:
          pass: true
          set:
          - name: Content-Type
            value: application/json
        responseHeaders:
          add:
          - name: Access-Control-Allow-Origin
            value: "*"
            always: true
          hide:
          - x-internal-version
          ignore:
          - Expires
          - Set-Cookie
          pass:
          - Server
        rewritePath: /
  - path: /coffee
    action:
      pass: coffee
This sample configuration shows how you can embed header manipulation and URI rewriting in an action block under a conditional match, for even more sophisticated traffic handling:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: tea-post
    service: tea-post-svc
    port: 80
  routes:
  - path: /tea
    matches:
    - conditions:
      - variable: $request_method
        value: POST
      action:
        proxy:
          upstream: tea-post
          requestHeaders:
            pass: true
            set:
            - name: Content-Type
              value: application/json
          rewritePath: /
    action:
      pass: tea
In most organizations, multiple teams collaborate on application delivery. For example, NetOps is responsible for opening ports and filtering traffic, SecOps ensures a consistent security posture, and multiple application owners provision Ingress load‑balancing policies for multiple backend application services.
Because every organization delegates roles in different ways, NGINX Ingress Controller gives you maximum flexibility in defining which teams own specific parts of the configuration, then assembles and executes all the pieces together. This supports multi‑tenancy while making delegation as simple as possible for the administrator.
In support of multi‑tenancy, NGINX Ingress Controller release 1.8.0 introduces policies and the first supported policy type: IP address‑based access control lists (ACLs).
Note: In release 1.8.0, policies can be referenced only in VirtualServer resources, where they apply to the server context. In a future release we plan to extend policy support to VirtualServerRoute resources, where policies apply to the location context. We also plan to implement policies for other functions, such as rate limiting.
The following diagram illustrates how a policy can be used. The VirtualServer resource on the left references a policy called webapp-policy, and the configuration snippets on the right each define a policy with that name that filters (allows or denies, respectively) connections from the 10.0.0.0/8 subnet. Assigning the name to both policies means you can toggle between them by applying the intended policy to the VirtualServer using the Kubernetes API.
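As a sketch of the scenario just described, the allow variant of webapp-policy and a VirtualServer that references it might look like the following. The field names follow the access‑control policy documentation for this release; the v1alpha1 API group and the webapp upstream/service names are assumptions for illustration:

```yaml
# Policy allowing connections only from the 10.0.0.0/8 subnet
apiVersion: k8s.nginx.org/v1alpha1
kind: Policy
metadata:
  name: webapp-policy
spec:
  accessControl:
    allow:
    - 10.0.0.0/8
---
# VirtualServer referencing the policy by name (upstream names are hypothetical)
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: webapp-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
```

To toggle to the deny variant, you would apply a Policy with the same name whose spec uses `deny` instead of `allow` for the same subnet.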
In environments with many Ingress resources (standard Kubernetes, NGINX, or both), it’s possible for Pods to come online before the NGINX configuration is fully loaded, causing traffic to be blackholed. In line with our commitment to production‑grade reliability, release 1.8.0 introduces a Kubernetes readiness probe to ensure that traffic is not forwarded to a specific Pod until the Ingress Controller is ready to accept traffic (that is, the NGINX configuration is loaded and no reloads are pending).
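In Deployment manifests this surfaces as a standard Kubernetes readinessProbe on the Ingress Controller container. The port and the /nginx-ready path below reflect the defaults in the project's example manifests, but verify them against your own deployment:

```yaml
# Excerpt from an Ingress Controller Deployment spec (sketch, not a full manifest)
containers:
- name: nginx-ingress
  image: nginx/nginx-ingress:1.8.0
  ports:
  - name: readiness-port
    containerPort: 8081
  readinessProbe:
    # Pod is marked Ready only once the controller reports its
    # NGINX configuration is loaded
    httpGet:
      path: /nginx-ready
      port: readiness-port
    periodSeconds: 1
```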
In previous releases, it was possible for multiple instances of NGINX Ingress Controller to coexist on the same cluster, but only by using the kubernetes.io/ingress.class annotation in the standard Kubernetes Ingress resource to specify the target NGINX Ingress Controller deployment. In release 1.8.0, we have added the ingressClassName field to the VirtualServer and VirtualServerRoute resources for the same purpose. We have also updated our Helm charts to support multiple NGINX Ingress Controller deployments.
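For example, with two Ingress Controller deployments running under different class names (the nginx-internal class below is a hypothetical name), a VirtualServer can target one of them via the new field:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  # Processed only by the Ingress Controller deployment with this class
  ingressClassName: nginx-internal
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  routes:
  - path: /
    action:
      pass: tea
```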
The output from the kubectl describe command indicates whether the configuration for an Ingress controller was successfully updated (in the Events section) and lists the IP addresses and ports of its external endpoints (in the Status section). In release 1.8.0 we have extended the VirtualServer and VirtualServerRoute resources to return this information. For VirtualServerRoute resources, the Status section also names the parent VirtualServer resource.
As an app owner, you might find that the parent VirtualServer of a VirtualServerRoute resource has been removed, making your application inaccessible. You can now troubleshoot the situation by issuing the kubectl describe command for the relevant VirtualServer and VirtualServerRoute resources.
This sample command shows that the configuration for the cafe VirtualServer was applied successfully – in the Events section, Type is Normal and Reason is AddedOrUpdated.
$ kubectl describe vs cafe
. . .
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 16s nginx-ingress-controller Configuration for default/cafe was added or updated
This sample shows that the configuration for the cafe VirtualServer was rejected as invalid – in the Events section, Type is Warning and Reason is Rejected. The reason is that there were two upstreams both named tea.
$ kubectl describe vs cafe
. . .
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rejected 12s nginx-ingress-controller VirtualServer default/cafe is invalid and was rejected: spec.upstreams[1].name: Duplicate value: "tea"
In the Status section of the output, the Message and Reason fields report the same information as in the Events section, along with the IP address and ports for each external endpoint.
$ kubectl describe vs cafe
. . .
Status:
External Endpoints:
Ip: 12.13.23.123
Ports: [80,443]
Message: VirtualServer default/cafe is invalid and was rejected: spec.upstreams[1].name: Duplicate value: "tea"
Reason: Rejected
For the complete changelog for release 1.8.0, see the Release Notes.
To try NGINX Ingress Controller with NGINX Plus and NGINX App Protect, start your free 30-day trial today or contact us to discuss your use cases.
To try NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code, or download a prebuilt container from DockerHub.