Digital transformation on the inside is integral to enabling digital transformation on the outside. One of the foundational components of internal digital transformation is automation, which relies heavily on the control plane. The control plane is where automation happens. In the olden days of computing we referred to it as the “management network” and used protocols like SNMP to provide monitoring, configuration, and control.
Today, the management network still exists, at least in theory, as the medium over which we perform the same tasks via the control plane. The control plane is a messy land of APIs, master nodes, and even message queues that enable individual components in a complex distributed system to (almost) automatically manage themselves. It is increasingly event-driven, which requires a change in thinking from the centralized command-and-control models of the past, which relied primarily on imperative models of management. That is, a central system explicitly instructs components with specific API calls that cause changes to occur. Today’s environments, by contrast, rely on declarative models that distribute responsibility for change to the components themselves.
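The difference can be sketched in a few lines of toy Python. All names here are illustrative, not any real product's API; the point is only who does the work: in the imperative model the controller issues every call itself, while in the declarative model the controller merely records the desired state and the component converges on it.

```python
# Toy sketch contrasting imperative and declarative control (illustrative names).

class Service:
    """Stand-in for a managed component."""
    def __init__(self):
        self.instances = 0
    def launch(self):
        self.instances += 1

# Imperative: a central controller spells out each step ("how").
def imperative_scale(service, target):
    while service.instances < target:
        service.launch()          # the controller issues every call itself

# Declarative: the controller publishes only the desired state ("what");
# the component reads it and changes itself.
desired = {"instances": 3}

def declarative_converge(service, desired):
    while service.instances < desired["instances"]:
        service.launch()          # the component does the work

a, b = Service(), Service()
imperative_scale(a, 3)
declarative_converge(b, desired)
```

The end state is identical; what differs is where responsibility for reaching it lives.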
In no system is this more evident than in containerized environments. From the outside, such systems appear to be nearly rogue in nature; messages and events are published and fired willy-nilly, with no overlord to direct who or what should react to them. The control plane is no longer so much about control as it is distribution across a plane that is more a mesh than the hub-and-spoke architectures of archaic management systems. In the traditional world we used APIs and protocols to push changes to components. In the digital, containerized world we use APIs to pull the information necessary for a component to change itself.
This new world is reactive and eschews the imperative (API-driven) model of the traditional control plane, relying instead on a more open, declarative model to achieve the desired automated end-state.
This is unsurprising. As we’ve increasingly adopted a software-driven approach to everything (under the mantle of DevOps and Cloud and NFV) we’ve simultaneously had to deal with massive operational scale. A hub-and-spoke, imperative model of management does not scale efficiently, because the burden for all changes lies on a central controller that must communicate via a confusing array of APIs with a nearly limitless group of components. This is a “push” model, in which the manager (controller) pushes changes to each affected component. It becomes the bottleneck that makes or breaks the entire system.
An event-driven model in which components pull changes is necessary to scale and to relieve the burden on the controller. That, in turn, requires that components wishing to participate in this control plane be comfortable with a declarative configuration model. Rather than having changes pushed to them via an API (imperative), containers are pushing us toward components that pull changes via declarative configurations instead. The onus is on component providers (whether open source or commercial) to correctly subscribe to changes and then pull the information required to implement each change immediately.
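The subscribe-then-pull pattern described above can be sketched as a toy in-memory example. Assume a hypothetical `ConfigStore` that notifies subscribers when a configuration changes; note that the event itself carries only a reference, and each component pulls the declarative config and applies it itself.

```python
# Minimal sketch of subscribe-then-pull (in-memory bus; names are illustrative).

class ConfigStore:
    """Holds declarative configs and notifies subscribers of changes."""
    def __init__(self):
        self.configs = {}
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, name, config):
        self.configs[name] = config
        for cb in self.subscribers:
            cb(name)                      # event: "something named 'name' changed"
    def pull(self, name):
        return self.configs[name]         # components fetch the details themselves

class Component:
    def __init__(self, store):
        self.store = store
        self.state = {}
        store.subscribe(self.on_change)
    def on_change(self, name):
        # The controller never pushed the config here; the component
        # reacts to the event and pulls what it needs.
        self.state[name] = self.store.pull(name)

store = ConfigStore()
c = Component(store)
store.publish("web", {"instances": 2})
```

No central controller tracks which component needs what; anything subscribed reacts on its own, which is what lets the model scale.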
If that sounds like infrastructure as code, it should. Declarative configurations are basically code, or at least code artifacts. Automation increasingly depends on the premise that configuration can be decoupled from the service it describes. In an ideal (utopian) model, these declarative configurations are completely agnostic; that is, they would be readable by any product from any vendor (commercial or open source) that supports that service. For example, a declarative configuration describing the appropriate service (virtual server) and the apps that comprise its pool of resources could be ingested and implemented by service X or service Y alike.
Kubernetes resource files are a good example of a declarative configuration model: what is desired is described, but how is nowhere prescribed. This is markedly different from systems that rely on infrastructure APIs, which require the implementation to be familiar – sometimes intimately – with how to achieve the desired results.
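The mechanism behind this is reconciliation: a controller continuously compares desired state against observed state and derives whatever actions close the gap. The toy reconciler below is a sketch of that idea only; the field names loosely echo a Kubernetes Deployment spec but are not its actual schema.

```python
# Toy desired-state reconciler (field names loosely echo a Deployment spec;
# this is an illustration, not the Kubernetes implementation).

desired = {"name": "web", "replicas": 3, "image": "nginx:1.25"}

def reconcile(desired, observed):
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    # Not enough replicas? Derive the creates ourselves -- nothing in the
    # desired state says *how* to get to three.
    for _ in range(desired["replicas"] - len(observed)):
        actions.append(("create", desired["image"]))
    # Any instance running the wrong image gets replaced.
    for pod in observed:
        if pod["image"] != desired["image"]:
            actions.append(("replace", pod["name"]))
    return actions

# One stale pod is observed; the reconciler works out the rest.
actions = reconcile(desired, [{"name": "web-1", "image": "nginx:1.24"}])
```

The declarative file never changes; only the observed world does, and the reconciler keeps pulling the two back together.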
The declarative model also enables us to treat infrastructure like cattle. If an instance fails, it’s a simple thing to kill it and launch a new one. All the configuration it needs is in the resource file; there’s no “save your work or it will be lost” button because there’s no work to lose. This is nearly immutable, and certainly disposable, infrastructure, and it’s a necessity in systems that change by the minute, if not by the second, because it minimizes the impact of failure.
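A toy illustration of that disposability, with entirely hypothetical names: because every bit of configuration lives in the declarative spec, a failed instance is replaced from the spec rather than repaired in place.

```python
# Disposable ("cattle") instances: all state lives in the spec,
# so replacement is trivial. Names are illustrative.

spec = {"service": "web", "image": "nginx:1.25"}

class Instance:
    def __init__(self, spec):
        self.spec = dict(spec)    # everything the instance needs is in the spec
        self.healthy = True

def ensure_running(instance, spec):
    # Kill-and-replace: there is no local state worth saving first.
    if instance is None or not instance.healthy:
        return Instance(spec)
    return instance

broken = Instance(spec)
broken.healthy = False
replacement = ensure_running(broken, spec)
```

The replacement is a fresh object built from the same spec; nothing from the failed instance needed to be recovered.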
As we increasingly move toward automated systems of scale and – dare I posit? – security, we will need to embrace declarative models for management of the myriad devices and services comprising the application data path or risk being buried under an avalanche of operational debt incurred from manual methods of integration and automation.