This componentization of IT mirrors the componentization of the applications IT is tasked with securing and delivering. It's estimated that 80 to 90% of modern applications are composed of third-party components, most of them open source. The benefits of doing so include speed, responsiveness to change (agility), and a reduction in the cost of creating the software. After all, if someone else already wrote the code for a wheel, why reinvent it?
There are no estimates as to just how componentized IT may be today, but the answer to how componentized it will be in the future is clear: very.
We don't build our own monitoring systems anymore; we adopt one, like Prometheus. We don't develop our own search engines; we integrate with Elasticsearch or Lucene. We don't have to design and develop our own orchestration and infrastructure controllers; we have Helm and Terraform. We're no longer asked about integrating with systems; we are asked about our support for ecosystems.
We build systems out of a software stack rather than developing each component ourselves.
The Ripple Effect of Componentization
This system-level thinking is pervasive in development and it's beginning to have a profound impact on the way all software—commercial and custom—is developed. It is also having a significant impact on the way we architect the network.
A few years ago I noted that microservices were breaking up the network. This remains a break-up in progress, for reasons that are closely tied to the mindset of DevOps. That is, DevOps is more likely to think in terms of componentized systems, particularly when influenced by cloud. As DevOps continues to encroach on traditional NetOps and operations turf, they bring with them their way of thinking. That means stacks instead of solutions.
This perspective leads naturally to the adoption of individual application services that better fit the mode of operation and thinking in which DevOps operates today. Single-purpose, functionally focused application services are used to compose a data path rather than construct one.
That means load balancing is load balancing. Ingress control is ingress control. And an API gateway is an API gateway. With a variety of application services, operational artisans compose (assemble) a data path that stretches from code (the app) to the customer (the client).
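The idea of composing rather than constructing a data path can be sketched in a few lines. The service names and chaining helper below are purely illustrative, not any real F5 or NGINX API; the point is that each application service does one job, and the data path is just their composition from code to customer.

```python
# Illustrative sketch: a data path assembled from single-purpose
# application services. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Request:
    host: str
    path: str
    trace: list = field(default_factory=list)

def ingress_control(req: Request) -> Request:
    # Ingress control: admit traffic at the edge.
    req.trace.append("ingress:admitted")
    return req

def api_gateway(req: Request) -> Request:
    # API gateway: route API calls, reject everything else.
    req.trace.append("gateway:routed" if req.path.startswith("/api") else "gateway:rejected")
    return req

def load_balancer(req: Request) -> Request:
    # Load balancing: select a backend pool member.
    req.trace.append("lb:pool-member-1")
    return req

def compose(*services):
    # Assemble the data path by chaining services in order.
    def data_path(req: Request) -> Request:
        for service in services:
            req = service(req)
        return req
    return data_path

data_path = compose(ingress_control, api_gateway, load_balancer)
result = data_path(Request(host="app.example.com", path="/api/orders"))
print(result.trace)  # → ['ingress:admitted', 'gateway:routed', 'lb:pool-member-1']
```

Swapping, adding, or removing a service in `compose(...)` changes the data path without touching any other component, which is the operational appeal of this model.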

We can see this in the extraordinary adoption rates of targeted services such as API gateways, ingress control, and bot defense reported in this year's State of Application Services report.
This shift has not gone unnoticed. Just as digital transformation continues to force businesses to redefine themselves and decompose into services represented by APIs and applications (digital capabilities), it dramatically changes the way we design, develop, and deliver application services.
Climbing up the Stack
IP-based routing has always been the way data paths are architected. Route this traffic here, route that type of traffic there, and if something in the payload matches X, route the traffic over there. It's very network-specific and thus tightly couples the data path to the network on which it's deployed.
That makes it difficult to replicate in other environments, like a public cloud. While you can likely reuse policies, you won't be able to take advantage of the configuration binding the data path to the network.
Both containers and cloud are forcing data paths to move up the stack and be assembled at the application layer from application services. That approach is much more portable across environments because you're operating on metadata, like host names or tags and labels, that is not bound to the network.
Ultimately that means we need to shift away from configurations to policies that can assemble data paths without being bound to IP addresses and environments.
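The contrast between a network-bound configuration and a portable policy can be made concrete. The rule and policy structures below are hypothetical, chosen only to illustrate the difference: the first matches on an IP address that exists in exactly one network, while the second matches on host names and labels that travel with the application into any environment.

```python
# Illustrative sketch: the same routing intent expressed two ways.
# Field names and values here are hypothetical, for contrast only.

# IP-bound configuration: tied to one network, hard to replicate elsewhere.
ip_bound_rule = {"match": {"dst_ip": "10.1.2.40"}, "action": "pool-a"}

# Metadata-based policy: matches on host name and labels, so the same
# policy can assemble a data path in any environment.
metadata_policy = {
    "match": {"host": "api.example.com", "labels": {"tier": "frontend"}},
    "action": "pool-a",
}

def matches(policy: dict, request_meta: dict) -> bool:
    # A policy matches when every key it names agrees with the request metadata.
    for key, expected in policy["match"].items():
        if key == "labels":
            if any(request_meta.get("labels", {}).get(k) != v
                   for k, v in expected.items()):
                return False
        elif request_meta.get(key) != expected:
            return False
    return True

# In a cloud or container environment the request carries only metadata,
# no fixed network address.
req = {"host": "api.example.com", "labels": {"tier": "frontend"}}
print(matches(metadata_policy, req))  # → True
print(matches(ip_bound_rule, req))    # → False, no IP to match on
```

The metadata policy works unchanged wherever the application lands; the IP-bound rule has to be rewritten for every network it moves to.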
There is no doubt that we're moving from solutions to stacks, from manual processes to pipelines. As we expand our digital capabilities across business and operations, the need for composition and control over the data path will continue to move up the stack and rely more heavily on the app services that direct it.