Since its early days, Kubernetes has included an API – the built-in Ingress resource – for configuring request routing of external HTTP traffic to Services. While it has been widely adopted by users and supported by many implementations (e.g., Ingress controllers), the Ingress resource limits its users in three major ways: it supports only a limited feature set, it relies on unstructured annotations for extensibility, and it recognizes only a single user role.
In response to these limitations, the Kubernetes community designed the Gateway API, a new project that provides a better alternative to the Ingress resource. In this blog post, we cover five reasons to try the Gateway API and discuss how it compares with the Ingress resource. We also introduce NGINX Gateway Fabric, our open source project that enables you to start using the Gateway API in your Kubernetes cluster while leveraging NGINX as a data plane.
Also, don’t miss a chance to chat with our engineers working on the NGINX Gateway Fabric project! NGINX, a part of F5, is proud to be a Platinum Sponsor of KubeCon North America 2023, and we hope to see you there! Come meet us at the NGINX booth to discuss how we can help enhance the security, scalability, and observability of your Kubernetes platform.
Note: The Gateway API supports multiple use cases related to Service networking, including the experimental service mesh. That said, this blog post focuses on the Gateway API’s primary use case of routing external traffic to Services in a cluster. Additionally, while the API supports multiple protocols, we limit our discussion to the most common protocol, HTTP.
The Gateway API is a collection of custom resources for provisioning and configuring a data plane that routes traffic to Services in your cluster.
These are the primary Gateway API resources:

- GatewayClass – defines a class of Gateways with common configuration and behavior, managed by a particular implementation.
- Gateway – requests a data plane instance and defines its listeners (the ports and protocols on which traffic enters the cluster).
- HTTPRoute – defines the routing rules that attach to a Gateway and direct HTTP requests to Services.
Another critical part of the Gateway API is a Gateway implementation, which is a Kubernetes controller that actually provisions and configures your data plane according to the Gateway API resources.
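For illustration, here is a minimal sketch of how these resources fit together (all names below are hypothetical, and the apiVersion depends on the Gateway API version you install): a Gateway references a GatewayClass supplied by an implementation, and an HTTPRoute attaches to the Gateway to route requests to a Service.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class # supplied by the Gateway API implementation
  listeners: # entry points (port/protocol) for traffic into the cluster
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway # attach this route to the Gateway above
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: example-svc # the Kubernetes Service that receives the traffic
      port: 80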
To learn more about the API, visit the Gateway API project website or watch a video introduction.
These are five key reasons to try the new Gateway API:

1. It supports many features and use cases.
2. It offers a superior extensibility model.
3. It separates responsibilities across user roles.
4. It is highly portable across implementations.
5. It is an evolving, community-driven project.
Let’s look at each reason in detail.
The Gateway API offers a multitude of features that unlock numerous use cases, some of which the Ingress resource does not fully support. These use cases include weighted traffic splitting and advanced request matching based on headers and query parameters.
For example, below is a request routing rule from an HTTPRoute that splits the traffic between two Kubernetes Services using weights. This enables the canary releases use case.
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: my-app-old
    port: 80
    weight: 95
  - name: my-app-new
    port: 80
    weight: 5
As a result, the data plane will route 95% of the requests to the Service my-app-old and the remaining 5% to my-app-new.
Next is an example featuring two rules, one of which leverages the Gateway API's advanced routing capabilities by matching on headers and query parameters.
- matches: # rule 1
  - path:
      type: PathPrefix
      value: /coffee
  backendRefs:
  - name: coffee-v1-svc
    port: 80
- matches: # rule 2
  - path:
      type: PathPrefix
      value: /coffee
    headers:
    - name: version
      value: v2
  - path:
      type: PathPrefix
      value: /coffee
    queryParams:
    - name: TEST
      value: v2
  backendRefs:
  - name: coffee-v2-svc
    port: 80
As a result, the data plane routes requests whose URI begins with /coffee to the Service coffee-v2-svc under two conditions: if the header version equals v2, or if the query parameter TEST equals v2 (for example, /coffee?TEST=v2), as defined in rule 2. All other requests for /coffee are routed to coffee-v1-svc, as defined in rule 1.
You can read the HTTPRoute documentation to learn about all of the supported features.
The Gateway API comes with a powerful extensibility model that allows an implementation to expose advanced data plane features that are not inherently supported by the API itself. While Ingress controllers work around some of the Ingress resource's limitations by supporting custom extensions through annotations, the Gateway API's extensibility model is superior to that approach.
For example, below is an Ingress resource extended with NGINX Ingress Controller annotations that enable several advanced NGINX features (each feature is explained in a comment next to its annotation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    nginx.org/lb-method: "ip_hash" # choose the ip_hash load-balancing method
    nginx.org/ssl-services: "webapp" # enable TLS to the backend
    nginx.org/proxy-connect-timeout: "10s" # configure timeouts to the backend
    nginx.org/proxy-read-timeout: "10s"
    nginx.org/proxy-send-timeout: "10s"
    nginx.org/rewrites: "serviceName=webapp rewrite=/v1" # rewrite request URI
    nginx.com/jwt-key: "webapp-jwk" # enable JWT authentication of requests
    nginx.com/jwt-realm: "Webb App"
    nginx.com/jwt-token: "$cookie_auth_token"
    nginx.com/jwt-login-url: "https://login.example.com"
spec:
  rules:
  - host: webapp.example.com
    . . .
Annotations were never meant for expressing such a large amount of configuration – they are simple key-value string pairs that lack structure, validation, and granularity. (By lack of granularity, we mean annotations are applied to a whole resource, not to a part of a resource such as an individual routing rule in an Ingress resource.)
The Gateway API includes a powerful, annotation-less extensibility model with several extension points: custom filters, policy attachments, and custom destinations. This model enables Gateway API implementations to offer advanced data plane features via custom resources that provide structure and validation. Additionally, users can apply an extension to a specific part of a resource, such as an individual routing rule, which adds granularity.
For example, this is how a custom filter from a hypothetical Gateway API implementation enhances a routing rule in an HTTPRoute by applying a rate limit specifically to the /coffee rule:
rules:
- matches:
  - path:
      type: PathPrefix
      value: /coffee
  filters:
  - type: ExtensionRef
    extensionRef:
      group: someproxy.example.com
      kind: RateLimit
      name: coffee-limit
  backendRefs:
  - name: coffee
    port: 80
The Gateway API implementation consequently applies the filter configured in the coffee-limit custom resource to the /coffee rule, where the rate limit specification can look like this:
rateLimit:
  rate: 10r/s
  key: ${binary_remote_addr}
Note: We showed a possible extension rather than a concrete one because the NGINX Gateway Fabric project hasn't yet taken advantage of the Gateway API extensibility model. This will change in the future, as the project will support many extensions that give users access to advanced NGINX features not available through the Gateway API.
The Ingress resource supports only a single user role (application developer), which has full control over how traffic reaches an application in a Kubernetes cluster. That level of control is often not required, and it can even inhibit safely sharing the data plane infrastructure among multiple developer teams.
The Gateway API divides responsibility over provisioning and configuring infrastructure among three roles: infrastructure provider, cluster operator, and application developer. These roles are summarized in the table below.
| Role | Owner of the Gateway API Resources | Responsibilities |
|---|---|---|
| Infrastructure provider | GatewayClass | Manage cluster-related infrastructure** |
| Cluster operator | Gateway, GatewayClass* | Manage a cluster for application developers |
| Application developer | HTTPRoute | Manage applications |
*If the cluster operator installs and manages a Gateway API implementation rather than using the one from the infrastructure provider, they will own the GatewayClass resource.
**Similar to a cloud provider offering managed Kubernetes clusters.
The above roles enable RBAC-enforced separation of responsibilities. This split works well for organizations in a common situation where a platform team (cluster operator) owns the data plane infrastructure and wants to share it safely among multiple developer teams in a cluster (application developers).
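As a rough sketch of that RBAC enforcement (the dev-team namespace and group below are hypothetical), a cluster operator could grant application developers permission to manage HTTPRoutes in their namespace while keeping Gateway and GatewayClass resources off-limits:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-developer
  namespace: dev-team
rules:
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["httproutes"] # no access to gateways or gatewayclasses
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-developer-binding
  namespace: dev-team
subjects:
- kind: Group
  name: dev-team-members # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io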
Two aspects of the Gateway API make it extremely portable: its broad set of core features, which every implementation must support, and its conformance tests, which verify that an implementation follows the API specification.
Because of this portability, users can switch from one Gateway API implementation to another with significantly less effort than it takes to switch an Ingress controller.
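As a sketch of what such a switch can look like (the GatewayClass name below is a placeholder rather than a name from any particular implementation), the HTTPRoutes typically stay the same, and after installing the new implementation the Gateway simply references its GatewayClass:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  # Switching implementations largely comes down to changing this reference
  # to the GatewayClass installed by the new implementation.
  gatewayClassName: implementation-a-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80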
The Gateway API is a community-driven project. It is also a young project that’s far from being finished. If you’d like to participate in its evolution – whether through proposing a new feature or sharing your feedback – check out the project’s contributing page.
Two steps are needed to try the Gateway API: first, install a Gateway API implementation in your cluster; second, create Gateway API resources (such as a Gateway and HTTPRoutes) to configure the data plane.
NGINX has created a Gateway API implementation – NGINX Gateway Fabric. This implementation uses NGINX as a data plane. To try it out, follow the installation instructions of its latest release. You can also check out our contributing guide if you’d like to ask questions or contribute to the project.
Our documentation includes multiple guides and examples that showcase different use cases enabled by the Gateway API. For additional support, check out the guides on the Kubernetes Gateway API project page.
Note: NGINX Gateway Fabric is a new project that has not yet reached the maturity of our NGINX Ingress Controller project. Additionally, while NGINX Gateway Fabric supports all core features of the Gateway API (see Reason 1), it doesn't yet offer Gateway API extensions for popular NGINX features (see Reason 2). In the meantime, those features remain available in NGINX Ingress Controller.
The Kubernetes Gateway API is a new community project that addresses the limitations of the Ingress resource. We discussed the top five reasons to try this new API and briefly introduced NGINX Gateway Fabric, an NGINX-based Gateway API implementation.
With NGINX Gateway Fabric, we are focused on a native NGINX implementation of the Gateway API. We encourage you to join us in shaping the future of Kubernetes app connectivity!
Ways you can get involved in NGINX Gateway Fabric include asking questions, reporting issues, trying it out and sharing feedback, and contributing code or documentation.
To join the project, visit NGINX Gateway Fabric on GitHub.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."