James Ward, who writes code (his description), recently wrote up a great piece comparing application deployment today with ten years ago. One of the better-kept secrets in the IT industry at large is that changes in application architectures and deployment models have a direct impact on application delivery (the actual process of making sure applications are fast, secure, and available for their consumers). The changes being wrought by microservices today, for example, will significantly reshape both the network architecture and its deployment model.
I know, I know. You’re thinking that moves toward microservices and containerization and cloud abstract the network, making applications less dependent on it. And that is true, from the perspective of developers and perhaps even ops. But a network is still required; the data exchanged still needs to flow over those pipes, and the services that reside in the network are still responsible for many of the functions that make apps go fast, keep them secure, and assure availability. Changes in where and how applications are composed and deployed necessarily impact those services, then, even if the application is not aware of their existence (and honestly, it should never even know. They’re like guardian angels that way.)
So, given that relationship (unrequited though it may be on the application’s side), James’ piece inspired a parallel look at how application delivery has changed over the past ten years.
Back in 2005, network teams (NetOps) deployed the various application services necessary to deliver applications in what we like to call a “conga line.” These were individual point solutions. If you needed something to make that app faster, you deployed a web performance optimization product on its own personal, private box. Another box for load balancing, another for app security, and so on. Eventually there were long lines of boxes, each of which had to be traversed in both directions.
In 2004 application delivery controllers emerged (but were very immature in 2005) and began to evolve toward what we have today, which is a platform. These platforms provide common functionality and processing and can be extended with modules (or plug-ins or add-ons or whatever you’d like to call them). The platform approach greatly reduces the time spent “on the wire”, much in the same way containerization and virtualization reduce the amount of time spent traveling between applications and services. It also offers the ability to reduce operational costs by sharing a common foundation – the platform – that normalizes management across the various application services needed to deliver an application today.
The evolution from product to platform is advantageous as application deployment moves toward more disposable, decomposed architectures that leverage emerging technology like containers, microservices, and cloud environments. A platform can deploy a variety of application services, or just one, on the same standardized core. That means less administrative overhead as the footprint of deployed application services expands, and the ability to add services as needed as applications grow.
In 2005, web-based GUIs were just coming into standard usage, and the primary method of provisioning and configuring application services was the CLI. This process, like all manual processes, took time, and as application services grew more complex, it took even more time. Human error and its consequences (being in the network exponentially increases the blast radius of a misconfiguration) demanded oversight that consumed still more time. Copy and paste was prevalent but not foolproof, and the administrative overhead of managing services was significant enough that only the most important applications were afforded their benefits.
Fast forward to 2015 and the DevOps revolution. Programmability – in both APIs and template-based configurations – is changing everything, even in the network. Application services can now be automated via APIs using popular tools like Puppet and Chef, pre-integrated with orchestration solutions like Cisco APIC and VMware NSX, and custom-driven by Python, Perl, bash, and curl. Application service templates enable standardization and reuse, and encourage treating infrastructure “as code” so that continuous delivery practices can extend into the network.
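To make the template idea concrete, here is a minimal sketch of template-driven service provisioning. The template fields, the function name, and the payload shape are all illustrative assumptions, not any particular vendor’s API; real management interfaces differ.

```python
import json

# A reusable "application service template": standard settings every
# app inherits, with per-app values filled in at deploy time.
SERVICE_TEMPLATE = {
    "profile": "http",
    "persistence": "cookie",
    "health_monitor": "http_200",
}

def render_virtual_server(app_name, vip, pool_members):
    """Merge per-app values into the shared template -- the reuse
    that keeps every deployment consistent."""
    config = dict(SERVICE_TEMPLATE)
    config.update({
        "name": f"vs_{app_name}",
        "destination": vip,
        "pool": [{"address": m, "port": 80} for m in pool_members],
    })
    return config

# Because the output is plain data, it can be versioned "as code" and
# pushed to a management API with any HTTP client (curl, Python, etc.).
payload = json.dumps(render_virtual_server(
    "storefront", "203.0.113.10:443", ["10.0.0.11", "10.0.0.12"]))
```

The point of the pattern is that the standard settings live in one place; provisioning a new app becomes a data-merge operation rather than a hand-typed CLI session.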
While not yet as ubiquitous as continuous delivery is within application development, application delivery has evolved (and continues to evolve) to support programmable deployment options that reduce time to market through faster service provisioning and lower operational costs through automation and reuse.
In 2005 the rush was on to build bigger, better, faster and more capable network hardware. Increasing Ethernet speeds and an explosion of web-based applications meant complementary expansion of the big iron deployed in the network to support application service requirements.
Today the focus is on density and optimal utilization of resources. That means virtualization and cloud computing, both of which are supported by the virtualization of application delivery platforms. Application delivery platforms are now capable of being deployed in cloud environments like AWS, Microsoft Azure, and Rackspace, as well as in a virtual appliance deployable on either purpose-built or off-the-shelf hardware. This capability is a requirement, not just to support cloud environments, but to adapt to emerging architectures like microservices. Microservices and servers themselves today are, as James Ward notes, “disposable, immutable, and ephemeral…”
This means that many of the application services moving into the domain of DevOps – load balancing, application security, and optimization – must meet the criteria for a software-defined data center. At a minimum they must be able to fit into an immutable, disposable model at scale. Many application delivery platforms are there, today, and combined with programmable deployment options, are ready to meet the challenge.
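What the immutable, disposable model means in practice can be sketched in a few lines: a running service instance is never edited in place; a change produces a replacement built from declarative config, and the old instance is discarded. The class and field names below are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: instances cannot be mutated after creation
class ServiceInstance:
    image: str        # versioned artifact the instance is built from
    config_hash: str  # identity of the declarative config it embodies

def redeploy(current, new_image, new_config_hash):
    """Return a fresh instance; the old one is simply discarded.
    Replacement, not modification, is what lets services scale and
    recover without configuration drift."""
    return ServiceInstance(image=new_image, config_hash=new_config_hash)

old = ServiceInstance("lb:1.0", "abc123")
new = redeploy(old, "lb:1.1", "def456")
# 'old' is unchanged and disposable; 'new' carries the updated state.
```

An application service that fits this model can be spun up, cloned, and thrown away as freely as the microservices it fronts.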
Network Team / DevOps
In 2005, and in many organizations still today, the network team was wholly responsible for deploying and managing application delivery. Every aspect of delivery, from procurement to provisioning to configuration, was (and often still is) handled by network operations. This disjointed model meant delays as requirements were hashed out, tickets were created, and engineers manually provisioned and configured the application services required to deliver an app.
Today this is still the case in many organizations, but the impact of cloud and microservices, combined with the adoption of DevOps, is changing that. The desire for IT as a Service is strong, and with it comes a shift of responsibility for configuring application services to operations or even development teams. Application-affine services need to be located closer to the app, both physically and topologically. This puts DevOps firmly in the driver’s seat, responsible for provisioning and configuring those services.
That doesn’t mean the network team is washing its hands of application delivery. A significant segment of applications and corporate concerns require application services delivered in a more traditional manner, focused on achieving reliability rather than agility. Those services are still, and will likely remain, in the domain of the network team. But others are moving, and will continue to move, into the responsibility of DevOps.
Application delivery has come a long way in the past ten years. It’s evolved from a set of products to a unified platform, gained programmability and morphed from big iron to supporting hypervisors and cloud deployment. As mobile applications and microservices and DevOps continue to radically change how applications are built, deployed, and delivered, expect to see continued evolution of the services that ultimately deliver those applications and keep them fast, secure, and available.