What Is an API Gateway?

An API gateway accepts API requests from a client, processes them based on defined policies, directs them to the appropriate services, and combines the responses for a simplified user experience. Typically, it handles a request by invoking multiple microservices and aggregating the results. It can also translate between protocols in legacy deployments.


API Gateway Capabilities

API gateways commonly implement capabilities that include:

  • Security policy – Authentication, authorization, access control, and encryption (illustrated in the sketch after this list)
  • Routing policy – Routing, rate limiting, request/response manipulation, circuit breaker, blue-green and canary deployments, A/B testing, load balancing, health checks, and custom error handling
  • Observability policy – Real-time and historical metrics, logging, and tracing
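
To make the security bullet concrete: with an API gateway that supports declarative configuration (such as the Kubernetes-native NGINX tools discussed later in this article), an authentication requirement might be expressed as a policy resource along these lines. This is only a sketch; the policy and Secret names are hypothetical.

```yaml
# Hypothetical JWT authentication policy for NGINX Ingress Controller.
# The referenced Secret ("api-jwk-secret") is assumed to hold the JSON Web Key
# used to validate tokens; the policy is enforced once attached to a VirtualServer.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-auth
spec:
  jwt:
    realm: "Products API"
    secret: api-jwk-secret
```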

For additional app- and API-level security, API gateways can be augmented with web application firewall (WAF) and denial of service (DoS) protection.

API Gateway Benefits

Deploying an API gateway for app delivery can help:

  • Reduce complexity and speed up app releases by encapsulating the internal application architecture and providing APIs tailored for each client type
  • Streamline and simplify request processing and policy enforcement by centralizing the point of control and offloading non-functional requirements to the infrastructure layer
  • Simplify troubleshooting with granular real-time and historical metrics and dashboards

API Gateway and Microservices Architecture

For microservices‑based applications, an API gateway acts as a single point of entry into the system. It sits in front of the microservices and simplifies both the client implementations and the microservices app by decoupling the complexity of an app from its clients.

In a microservices architecture, the API gateway is responsible for request routing, composition, and policy enforcement. It handles some requests by simply routing them to the appropriate backend service, and handles others by invoking multiple backend services and aggregating the results.

An API gateway can also provide other capabilities for microservices, such as authentication, authorization, monitoring, load balancing, and response handling. By offloading these non-functional requirements to the infrastructure layer, it helps developers focus on core business logic and speeds up app releases.

Learn more about Building Microservices Using an API Gateway on our blog.

API Gateway for Kubernetes

Containers are a highly efficient way to run microservices, and Kubernetes is the de facto standard for deploying and managing containerized applications and workloads.

Depending on the system architecture and app delivery requirements, an API gateway can be deployed in front of the Kubernetes cluster as a load balancer (multi-cluster level), at its edge as an Ingress controller (cluster-level), or within it as a service mesh (service-level).


For API gateway deployments at the edge and within the Kubernetes cluster, it’s best practice to use a Kubernetes-native tool as the API gateway. Such tools are tightly integrated with the Kubernetes API, are configured with YAML, and can be managed with the standard Kubernetes CLI (kubectl); examples include NGINX Ingress Controller and NGINX Service Mesh.

Learn more about API gateways and Kubernetes in API Gateway vs. Ingress Controller vs. Service Mesh on our blog.

API Gateway and Ingress Gateway or Ingress Controller

Ingress gateways and Ingress controllers are tools that implement the Ingress object, a part of the Kubernetes Ingress API, to expose applications running in Kubernetes to external clients. They manage communications between users and applications (user-to-service or north-south connectivity). However, the Ingress object by itself is very limited in its capabilities. For example, it provides no way to define security policies, such as authentication or rate limiting, for the traffic it routes. As a result, many vendors create custom resource definitions (CRDs) to expand their Ingress controller’s capabilities, satisfy evolving customer needs and requirements, and enable use of the Ingress controller as an API gateway.
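
For example, a minimal Ingress resource describes only host- and path-based routing to backend Services; there is no field in the core Ingress spec for attaching an authentication or rate-limiting policy. The hostname, service names, and ingress class below are hypothetical.

```yaml
# Minimal Ingress: routes api.example.com/products and /orders to two Services.
# Hostname, Service names, and ingress class are placeholder values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: products-svc
            port:
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-svc
            port:
              number: 80
```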

For example, NGINX Ingress Controller can be used as a full-featured API gateway at the edge of a Kubernetes cluster with its VirtualServer and VirtualServerRoute, TransportServer, and Policy custom resources.
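
As a rough sketch of how that might look (the hostname, Service, and policy names are hypothetical), a rate-limiting Policy can be combined with routing in a VirtualServer resource:

```yaml
# Hypothetical rate-limiting policy: 10 requests per second per client IP.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit
spec:
  rateLimit:
    rate: 10r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
---
# VirtualServer that applies the policy and routes /products to a backend Service.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: api
spec:
  host: api.example.com
  policies:
  - name: rate-limit
  upstreams:
  - name: products
    service: products-svc
    port: 80
  routes:
  - path: /products
    action:
      pass: products
```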

API Gateway Is Not the Same as Gateway API

While their names are similar, an API gateway is not the same as the Kubernetes Gateway API. The Kubernetes Gateway API is an open source project managed by the Kubernetes community to improve and standardize service networking in Kubernetes. The Gateway API specification evolved from the Kubernetes Ingress API to solve various challenges around deploying Ingress resources to expose Kubernetes apps in production, including the ability to define fine-grained policies for request processing and delegate control over configuration across multiple teams and roles.

Tools built on the Gateway API specification, such as NGINX Kubernetes Gateway, can be used as API gateways for use cases that include routing requests to specific microservices, implementing traffic policies, and enabling canary and blue‑green deployments.
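
For instance, a canary rollout might be expressed as an HTTPRoute that splits traffic between two versions of a backend Service by weight. The Gateway name, hostname, and Services below are hypothetical, and the exact apiVersion depends on the Gateway API release installed in the cluster.

```yaml
# Hypothetical canary split: 90% of /coffee traffic to v1, 10% to v2.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee-canary
spec:
  parentRefs:
  - name: my-gateway        # assumed Gateway resource managed by the platform team
  hostnames:
  - cafe.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    backendRefs:
    - name: coffee-v1
      port: 80
      weight: 90
    - name: coffee-v2
      port: 80
      weight: 10
```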

Watch this quick video where NGINX’s Jenn Gile explains the difference between an API gateway and the Kubernetes Gateway API.

Service Mesh vs API Gateway

A service mesh is an infrastructure layer that controls communications across services in a Kubernetes cluster (service-to-service or east-west connectivity). The service mesh delivers core capabilities for services running in Kubernetes, including load balancing, authentication, authorization, access control, encryption, observability, and advanced patterns for managing connectivity (circuit breaker, A/B testing, and blue-green and canary deployments), to ensure that communication is fast, reliable, and secure.

Deployed closer to the apps and services, a service mesh can be used as a lightweight, yet comprehensive, distributed API gateway for service-to-service communications in Kubernetes.
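
NGINX Service Mesh, for example, implements the Service Mesh Interface (SMI) specification, so a canary split for east-west traffic can be declared with an SMI TrafficSplit resource. This is only a sketch; the service names below are hypothetical.

```yaml
# Hypothetical east-west canary: 90% of traffic addressed to target-svc goes to
# the v1 workload's service, 10% to v2. Requires an SMI-compliant mesh such as
# NGINX Service Mesh.
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: target-split
spec:
  service: target-svc        # root service that clients address
  backends:
  - service: target-v1
    weight: 90
  - service: target-v2
    weight: 10
```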

Learn more about service mesh in How to Choose a Service Mesh on our blog.

API Gateway and API Management

The terms API gateway and API management are often – but incorrectly – used to describe the same functionality.

An API gateway is a data-plane entry point for API calls that represent client requests to target applications and services. It typically performs request processing based on defined policies, including authentication, authorization, access control, SSL/TLS offloading, routing, and load balancing.

API management is the process of deploying, documenting, operating, and monitoring individual APIs. It is typically accomplished with management-plane software (for example, an API manager) that defines and applies policies to API gateways and developer portals.

Depending on business and functional requirements, an API gateway can be deployed as a standalone component in the data plane, or as part of an integrated API management solution, such as F5 NGINX Management Suite API Connectivity Manager.

Considerations for Choosing an API Gateway

There are several key factors to consider when deciding on requirements for your API gateway:

  • Architecture – Where you deploy the API gateway can impact your choice of tooling, as can the decision to use built-in options from your cloud provider. Do you need the flexibility of a platform and runtime agnostic API gateway?
  • Performance – Performance is critical for high-traffic websites and applications. Does your API gateway deliver the high throughput and low latency you need?
  • Scalability – Your API gateway needs to easily scale to meet increasing traffic demands. Does your API gateway support scaling vertically (high throughput) and horizontally (high availability) to ensure your APIs are always fast and available?
  • Security – API gateways are an important part of a zero-trust architecture. Does your API gateway offer access control (AuthN/AuthZ), mTLS, and other advanced security features like an integrated WAF and OpenAPI schema validation for positive security?
  • Cost – Understand the total cost of ownership (TCO) of the API gateway. What are the costs and tradeoffs of building and maintaining a custom solution vs purchasing an enterprise-grade API gateway?

How NGINX Can Help

NGINX offers several options for deploying and operating an API gateway depending on your use cases and deployment patterns.

Kubernetes-native tools:

  • NGINX Ingress Controller – Manages app connectivity at the edge of a Kubernetes cluster with API gateway, identity, and observability features
  • NGINX Service Mesh – Developer-friendly solution for service-to-service connectivity, security, orchestration, and observability

Get started by requesting your free 30-day trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, and download the always free NGINX Service Mesh.

Universal tools:

  • NGINX Plus as an API gateway – Lightweight, high-performance API gateway that can be deployed across cloud, on-premises, and edge environments
  • F5 NGINX Management Suite API Connectivity Manager – Deploy and operate API gateways with developer-friendly tools for API management, governance, and security

To learn more about using NGINX Plus as an API gateway, request your free 30-day trial and see Deploying NGINX as an API Gateway on our blog. To try NGINX Management Suite, request your free 30-day trial.