While the standard Kubernetes Ingress resource is great for provisioning and configuring basic Ingress load balancing, it doesn’t include the kind of customization features required to make Kubernetes production‑grade. Instead, users of other Ingress controllers are left to rely on annotations, ConfigMaps, and custom templates, which are error‑prone, difficult to use, insecure, and lack fine‑grained scoping. NGINX Ingress resources are our answer to this problem.
NGINX Ingress resources are available for both the NGINX Open Source and NGINX Plus-based versions of NGINX Ingress Controller. They provide a native, type‑safe, and indented configuration style which simplifies implementation of Ingress load balancing. In this blog, we focus on two features introduced in NGINX Ingress Controller 1.11.0 that make it easier to configure WAF and load balancing policies: extensions to the TransportServer resource and a native WAF Policy object for NGINX App Protect.
NGINX Ingress Controller 1.11.0 extends the TransportServer (TS) resource in three areas covered below: configuration snippets, health checks, and the ingressClassName field together with status reporting.
Note: The additions to the TransportServer resource in release 1.11.0 are a technology preview under active development. They will be graduated to a stable, production‑ready quality standard in a future release.
In NGINX Ingress Controller, we previously introduced configuration snippets for the VirtualServer and VirtualServerRoute (VS and VSR) resources, which enable you to natively extend NGINX Ingress configurations for HTTP‑based clients. Release 1.11.0 introduces snippets for TS resources, so you can easily leverage the full range of NGINX and NGINX Plus capabilities to deliver TCP/UDP‑based services. For example, you can use snippets to add deny and allow directives that use IP addresses and ranges to define which clients can access a service:
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  serverSnippets: |
    deny 192.168.1.1;
    allow 192.168.1.0/24;
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
To monitor the health of a Kubernetes cluster, NGINX Ingress Controller not only considers Kubernetes probes, which are local to application pods, but also monitors the network path to TCP/UDP‑based upstream services, using passive health checks to assess the health of transactions in flight and active health checks (exclusive to NGINX Plus) to probe endpoints periodically with synthetic connection requests.
Health checks can be very useful for circuit breaking and handling application errors. You can customize the health check using parameters in the healthCheck field of the TS resource that set the interval between probes, the probe timeout, delay times between probes, and more.
Additionally, you can set the upstream service and port destination of health probes from NGINX Ingress Controller. This is useful in situations where the health of the upstream application is exposed on a different listener by another process or subsystem which monitors multiple downstream components of the application.
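For example, a health check for a TCP upstream might look something like the following sketch (the listener, service, and port values are hypothetical, and the listener is assumed to be defined in a GlobalConfiguration resource; the field names follow the TS healthCheck specification):

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-tcp
spec:
  listener:
    name: dns-tcp
    protocol: TCP
  upstreams:
  - name: dns-app
    service: coredns       # hypothetical service exposing the application
    port: 5353
    healthCheck:
      enable: true
      interval: 20s        # time between probes
      timeout: 30s         # how long to wait for a probe to succeed
      jitter: 2s           # random delay added between probes
      fails: 3             # failed probes before an endpoint is marked unhealthy
      passes: 2            # successful probes before it is marked healthy again
      port: 8080           # probe a separate port that exposes the application's health status
  action:
    pass: dns-app

Remember that active health checks like this one are exclusive to NGINX Plus.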
When you update and apply a TS resource, it’s useful to verify that the configuration is valid and was applied successfully to the corresponding Ingress Controller deployment. Release 1.11.0 introduces the ingressClassName field and status reporting for the TS resource. The ingressClassName field ensures the TS resource is processed by a particular Ingress Controller deployment in environments where you have multiple deployments.
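For example, this sketch pins the cafe TS resource shown earlier to the Ingress Controller deployment that watches the nginx Ingress class (the class name is an assumption; use the class your deployment was started with):

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: cafe
spec:
  ingressClassName: nginx   # processed only by the deployment watching the "nginx" class
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80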
To display the status of one or all TS resources, run the kubectl get transportserver command; the output includes the state (Valid or Invalid), the reason for the most recent update, and (for a single TS) a custom message.
$ kubectl get transportserver
NAME      STATE   REASON           AGE
dns-tcp   Valid   AddedOrUpdated   47m

$ kubectl describe transportserver dns-tcp
. . .
Status:
  Message:  Configuration for default/dns-tcp was added or updated
  Reason:   AddedOrUpdated
  State:    Valid
If multiple TS resources contend for the same host or listener, NGINX Ingress Controller selects the TS resource with the oldest timestamp, ensuring a deterministic outcome in that situation.
NGINX Ingress resources not only make configuration easier and more flexible, they also enable you to delegate traffic control to different teams and impose stricter privilege restrictions on users that own application subroutes, as defined in VirtualServerRoute (VSR) resources. By giving the right teams access to the right Kubernetes resources, NGINX Ingress resources give you fine‑grained control over networking resources and reduce potential damage to applications if users are compromised or hacked.
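As one possible sketch of such delegation using standard Kubernetes RBAC (the namespace and group names are made up for illustration), a team can be granted rights only to VirtualServerRoute resources in its own namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vsr-editor
  namespace: test
rules:
# VSR resources live in the k8s.nginx.org API group
- apiGroups: ["k8s.nginx.org"]
  resources: ["virtualserverroutes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-test-vsr-editor
  namespace: test
subjects:
- kind: Group
  name: team-test          # hypothetical group representing the application team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vsr-editor
  apiGroup: rbac.authorization.k8s.io

With a binding like this, members of the team can manage subroutes in their namespace but cannot touch the VirtualServer resources owned by the platform team.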
Release 1.11.0 introduces a native web application firewall (WAF) Policy object to extend these benefits to configuration of NGINX App Protect in your Kubernetes deployments. The policy leverages the APLogConf and APPolicy objects introduced in release 1.8.0 and can be attached to both VirtualServer (VS) and VSR resources. This means that security administrators can have ownership over the full scope of the Ingress configuration with VS resources, while delegating security responsibilities to other teams by referencing VSR resources.
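As a sketch of what a WAF Policy object such as the waf-prod policy used below might contain (the APPolicy and APLogConf references and the log destination are placeholders), the waf field points at an APPolicy resource and, optionally, enables security logging through an APLogConf resource:

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: waf-prod
spec:
  waf:
    enable: true
    apPolicy: "default/dataguard-alarm"   # APPolicy object, referenced as namespace/name
    securityLog:
      enable: true
      apLogConf: "default/logconf"        # APLogConf object defining the log format and filter
      logDest: "syslog:server=syslog-svc.default:514"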
In the following example, the waf-prod policy is applied to users being routed to the webapp-prod upstream. To delegate security responsibilities for the /v2 route across namespaces owned by different teams, the route field for that path references a VSR resource.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: waf-prod
  tls:
    secret: app-secret
  upstreams:
  - name: webapp-prod
    service: webapp-svc
    port: 80
  routes:
  - path: /v2
    route: test/test
  - path: /v1
    action:
      pass: webapp-prod
The teams that manage the test namespace can set their own parameters and WAF policies using VSR resources in that namespace.
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: test
  namespace: test
spec:
  host: webapp.example.com
  upstreams:
  - name: webapp
    service: webapp-test-svc
    port: 80
  subroutes:
  - path: /v2
    policies:
    - name: waf-test
    action:
      pass: webapp
This example separates tenants by namespace and applies a different WAF policy to the webapp-test-svc service in the test namespace. It illustrates how delegating resources to different teams and encapsulating them in objects simplifies testing new functionality without disrupting applications in production.
With NGINX Ingress Controller 1.11.0 we continue our commitment to providing a production‑grade Ingress controller that is flexible, powerful, and easy to use. In addition to the WAF and TS improvements, release 1.11.0 validates Policy objects before they are applied to the Ingress configuration and includes the enhancements described below.

Building on the improvements to annotation validation introduced in release 1.10.0, we are now validating the following additional annotations:
| Annotation | Validation |
|---|---|
| nginx.org/client-max-body-size | Must be a valid offset |
| nginx.org/fail-timeout | Must be a valid time |
| nginx.org/max-conns | Must be a valid non‑negative integer |
| nginx.org/max-fails | Must be a valid non‑negative integer |
| nginx.org/proxy-buffer-size | Must be a valid size |
| nginx.org/proxy-buffers | Must be a valid proxy buffer spec |
| nginx.org/proxy-connect-timeout | Must be a valid time |
| nginx.org/proxy-max-temp-file-size | Must be a valid size |
| nginx.org/proxy-read-timeout | Must be a valid time |
| nginx.org/proxy-send-timeout | Must be a valid time |
| nginx.org/upstream-zone-size | Must be a valid size |
If the value of the annotation is not valid when the Ingress resource is applied, the Ingress Controller rejects the resource and removes the corresponding configuration from NGINX.
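For example, an Ingress resource that uses some of the validated annotations might look like the following sketch (the values simply illustrate valid formats, and the apiVersion assumes Kubernetes 1.19 or later):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.org/client-max-body-size: "4m"    # must be a valid offset
    nginx.org/proxy-connect-timeout: "30s"  # must be a valid time
    nginx.org/max-fails: "3"                # must be a non-negative integer
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cafe-svc
            port:
              number: 80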
The kubectl get policy command now reports the policy’s state (Valid or Invalid) and (for a single policy) a custom message and the reason for the most recent update.
$ kubectl get policy
NAME            STATE   AGE
webapp-policy   Valid   30s

$ kubectl describe policy webapp-policy
. . .
Status:
  Message:  Configuration for default/webapp-policy was added or updated
  Reason:   AddedOrUpdated
  State:    Valid
NGINX Ingress Controller can now be used as the Ingress controller for apps running inside an Istio service mesh. This allows users to continue using the advanced capabilities that NGINX Ingress Controller provides in Istio‑based environments without resorting to workarounds. The integration involves two requirements: injecting the Istio sidecar into the NGINX Ingress Controller pod, and ensuring that only a single Host header is sent to the backend. To satisfy the first requirement, include the following items in the annotations field of your NGINX Ingress Controller Deployment file.
annotations:
  traffic.sidecar.istio.io/includeInboundPorts: ""
  traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
  traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.90.0.0/16,10.45.0.0/16"
  sidecar.istio.io/inject: 'true'
The second requirement is achieved by a change to the behavior of the requestHeaders field. In previous releases, with the following configuration two Host headers were sent to the backend: $host and the specified value, bar.example.com.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: foo
spec:
  host: foo.example.com
  upstreams:
  - name: foo
    port: 8080
    service: backend-svc
    use-cluster-ip: true
  routes:
  - path: "/"
    action:
      proxy:
        upstream: foo
        requestHeaders:
          set:
          - name: Host
            value: bar.example.com
In release 1.11.0 and later, only the specified value is sent. To send $host, omit the requestHeaders field entirely.
The upstream endpoints in the NGINX Ingress Controller configuration can now be populated with service/cluster IP addresses, instead of the individual IP addresses of pod endpoints. To enable NGINX Ingress Controller to route traffic to Cluster IP services, include the use-cluster-ip: true field in the upstreams section of your VS or VSR configuration:
upstreams:
- name: tea
  service: tea-svc
  port: 80
  use-cluster-ip: true
- name: coffee
  service: coffee-svc
  port: 80
  use-cluster-ip: true
For the complete changelog for release 1.11.0, see the Release Notes.
To try NGINX Ingress Controller for Kubernetes with NGINX Plus and NGINX App Protect, start your free 30-day trial today or contact us to discuss your use cases.
To try NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code, or download a prebuilt container from DockerHub.
For a discussion of the differences between Ingress controllers, check out Wait, Which NGINX Ingress Controller for Kubernetes Am I Using? on our blog.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."