In Container Land, Declarative Configuration is King

F5 Ecosystem | July 17, 2017

Digital transformation on the inside is integral to enabling digital transformation on the outside. One of the foundational components of internal digital transformation is automation, which relies heavily on the control plane. The control plane is where automation happens. In the olden days of computing we referred to it as the “management network” and used protocols like SNMP to provide monitoring, configuration, and control.

Today, the management network still exists, theoretically at least, as the medium over which we perform the same tasks via the control plane. The control plane is a messy land of APIs, master nodes, and even message queues that enable individual components in a complex distributed system to (almost) automatically manage themselves. It is increasingly event-driven, which requires a change in thinking from the centralized command-and-control models of the past that relied primarily on imperative models of management. In those models, a central system explicitly instructs components with specific API calls that cause changes to occur. Today’s environments, on the other hand, rely on declarative models that distribute to each component the responsibility for changing itself.

In no system is this more evident than in containerized environments. From the outside, such systems appear to be nearly rogue in nature; messages and events are published and fired willy-nilly, with no overlord to direct who or what should react to them. The control plane is no longer so much about control as it is distribution across a plane that is more a mesh than the hub and spoke architectures of archaic management systems. In the traditional world we used APIs and protocols to push changes to components. In the digital, containerized world we use APIs to pull the information necessary for a component to change itself.

[Image: imperative vs. declarative]

This new world is reactive and eschews the imperative (API-driven) model of the traditional control plane, relying instead on a more open, declarative model to achieve the desired automated end-state.
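To make that contrast concrete, here is a minimal sketch. The field names and API calls below are illustrative only, not any particular product’s schema: the imperative approach scripts the how as a sequence of explicit calls, while the declarative approach records only the desired end-state and leaves the how to the platform.

```yaml
# Imperative (traditional): a controller scripts the "how" step by step, e.g.
#   POST /instances            {"image": "web:1.4"}
#   POST /pools/web/members    {"instance": "i-123"}
#   PUT  /monitors/web         {"type": "http", "interval": 5}
#
# Declarative (container land): describe only the desired end-state and let
# each component pull and reconcile it. Field names here are hypothetical.
desiredState:
  service: web
  image: "web:1.4"
  replicas: 3
  healthCheck:
    type: http
    intervalSeconds: 5
```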

This is unsurprising. As we’ve increasingly adopted a software-driven approach to everything (under the mantle of DevOps and Cloud and NFV) we’ve simultaneously had to deal with massive operational scale. A hub-and-spoke, imperative model of management does not scale efficiently, because the burden for every change lies on a central controller that must communicate via a confusing array of APIs with a nearly limitless group of components. This is a “push” model, in which the manager (controller) pushes changes to each affected component. It becomes the bottleneck that makes or breaks the entire system.

An event-driven model that relies on components pulling changes is necessary to scale and to relieve the burden on the controller, which in turn requires that any component wishing to participate in this control plane be comfortable with a declarative configuration model. Rather than having changes pushed to them via an API (imperative), components in container environments are expected to pull changes expressed as declarative configurations. The onus is on component providers (whether open source or commercial) to subscribe to the right changes and then immediately pull the information required to implement them.

If that sounds like infrastructure as code, it should. Declarative configurations are basically code, or at least code artifacts. Automation increasingly depends on that premise: configuration is decoupled from the service it configures. In an ideal (if utopian) model, these declarative configurations are completely agnostic. That is, they would be readable by any product from any vendor (commercial or open source) that supports that service. For example, a declarative configuration describing the appropriate service (virtual server) and the apps that comprise its pool of resources could be ingested and implemented by either service X or service Y.
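As a sketch of what such a vendor-agnostic declaration might look like (the schema below is hypothetical, not any actual product’s format), a virtual server and its pool could be described like this and handed to whichever service understands it:

```yaml
# Hypothetical, vendor-neutral declaration of a virtual server and its pool.
# The schema is illustrative; the point is that it names the "what", not the "how".
virtualServer:
  name: storefront-vs
  address: 203.0.113.10
  port: 443
  protocol: https
  pool:
    name: storefront-pool
    loadBalancing: round-robin
    members:
      - host: app-1.internal
        port: 8443
      - host: app-2.internal
        port: 8443
    healthMonitor:
      type: https
      path: /healthz
```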

Kubernetes resource files are a good example of a declarative configuration model in which what is desired is described, but nowhere is how prescribed. This is markedly different from systems that rely on infrastructure APIs that require the implementation to be familiar – sometimes intimately – with how to achieve the desired results.
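For instance, a minimal Deployment manifest (the names and image below are illustrative) declares that three replicas of an app should exist; nothing in it prescribes how they get scheduled, started, or replaced:

```yaml
# A minimal Kubernetes Deployment: the desired state is three running replicas.
# Nowhere does it say how to schedule, start, or replace them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
spec:
  replicas: 3
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: "example.com/storefront:1.4.2"  # illustrative image name
          ports:
            - containerPort: 8080
```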

The declarative model also enables us to treat infrastructure like cattle. If an instance fails, it’s a simple thing to kill it and launch a new one. All the configuration it needs is in the resource file; there’s no “save your work or it will be lost” button because there’s no work to lose. This is almost immutable, and definitely disposable, infrastructure, and it’s a necessity for minimizing the impact of failure in systems that change by the minute, if not by the second.

As we increasingly move toward automated systems of scale and – dare I posit? – security, we will need to embrace declarative models for management of the myriad devices and services comprising the application data path or risk being buried under an avalanche of operational debt incurred from manual methods of integration and automation.

Tags: DevOps, 2017, Tech

About the Author

Lori Mac Vittie, Distinguished Engineer and Chief Evangelist
