There’s as much confusion as there is chaos in container land. Every day seems to bring some new capability or component to the world of container orchestration environments. That’s necessary, because it’s still maturing as use of containers expands beyond the experimental into the existential.
Speed and scale are the two primary drivers of container deployments. The former is as much about development as it is about delivery, so here the focus is on scale. And not just vanilla protocol scale; we’re talking about application scale.
The distinction is important. Containers have been voted most likely to contain microservices, and one of the cardinal rules of microservices is communication via API only. That API is based on HTTP, not raw TCP, and thus requires a smarter solution for scale.
Most container orchestration environments come “out of the box” with proxies capable of vanilla scale. That means plain old load balancing (POLB) at the TCP layer. IP addresses and ports are the lingua franca of these proxies. While they do fine in an environment where services are differentiated based on an IP address/port combination, they don’t do so well for applications (services) that are differentiated by HTTP layer characteristics – like API version, or URI, or host name. Those are app layer (HTTP) constructs, and require smarter proxies to both route and scale with the speed desired. These constructs must be taken into consideration upon receipt of a request from a client-side entity, something most vanilla scale solutions for containers can’t provide.
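For contrast, the vanilla scale described above is roughly what a plain Kubernetes Service gives you: load balancing on IP address and port alone, with no awareness of hosts, URIs, or API versions. A minimal sketch (names hypothetical):

```yaml
# A plain Kubernetes Service: TCP/port-level load balancing only (POLB).
# It routes on IP address and port and knows nothing about
# app-layer constructs like host name or URI.
apiVersion: v1
kind: Service
metadata:
  name: api-v1          # hypothetical service name
spec:
  selector:
    app: api            # hypothetical pod labels
    version: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

Anything that needs to differentiate requests by HTTP characteristics has to sit above this layer.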
In response to this need rises the notion of Ingress* control. Ingress control is basically app or HTTP routing or layer 7 switching or content switching or any other of a dozen or so names the capability has gone by since the turn of the century. Ingress control assumes service differentiation at the application (HTTP) layer, and accordingly acts upon it when making routing and scaling decisions inside the container environment.
But you can’t just slap an F5 BIG-IP in front of a container environment and call it Ingress control. That’s because an Ingress controller must also be integrated with the container orchestration environment to achieve the scale and speed desired. To do that, you need something that lives inside the container environment and natively speaks both container orchestration and BIG-IP.
That’s what the BIG-IP Controller for Kubernetes does. It’s a Docker container that runs in a Kubernetes Pod and enables you to use a BIG-IP as a Kubernetes Ingress controller. That means it can read the Kubernetes Ingress resource and automatically configure BIG-IP with the appropriate objects to make sure requests are scaled based on the app layer constructs you desire.
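As a sketch, the kind of Ingress resource the controller reads might look like the following. The host and path rules are standard Kubernetes; the annotations shown are assumptions based on the controller’s conventions, so check the k8s-bigip-ctlr documentation for the exact names and values your version expects:

```yaml
# Hypothetical Ingress resource: routes on host and URI (app-layer
# constructs), which the BIG-IP Controller translates into BIG-IP objects.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "f5"      # assumed ingress class for this controller
    virtual-server.f5.com/ip: "10.1.2.3"   # assumed annotation: VIP on the BIG-IP
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1                      # requests differentiated by URI...
            backend:
              serviceName: api-v1          # ...land on different services
              servicePort: 80
          - path: /v2
            backend:
              serviceName: api-v2
              servicePort: 80
```

The controller watches resources like this and configures the BIG-IP accordingly, so routing and scaling decisions happen on HTTP-layer characteristics rather than just IP and port.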
Now, prior to the availability of this controller, folks tended to use BIG-IP to “spray” traffic across a second layer of proxies running inside the container orchestration environment. Those proxies provided Ingress control. There are a few good reasons to stop doing that, including the recursive headache of running your availability service inside the very thing it’s providing availability for.
There are other good reasons, too.
Whatever the reason might be, the reality is that you can use a BIG-IP as an Ingress controller for Kubernetes. You don’t need two different tiers to scale. Eliminating that second tier of scale will improve speed (of delivery and deployment) and simplify deployments while providing a platform on which you can enable a wide variety of advanced services for security, speed, and scale.
docker pull f5networks/k8s-bigip-ctlr
* Yes, the capital “I” is important: it distinguishes Ingress from the traditional networking term “ingress,” which simply refers to access into the environment, whereas “Ingress” refers to HTTP routing. Yes, we do tend to make things more difficult than they have to be, but such is the world in which developers are implementing network constructs and redefining more than just how apps are delivered.