Back in 1983, when I was still learning how to peek and poke at hardware on my Apple ][e, a group of like-minded folks in the computer and telecom industries got together to create a detailed specification they called Open Systems Interconnection (OSI). What began as an effort to hash out actual interfaces eventually morphed into a common reference model that in turn could be used by others – like the IETF – to develop interfaces. Those interfaces could – and did – eventually become standards: IP, TCP, HTTP, and so on.
This reference model has been taught to most of us who trudged through computer science classes in college. We learned about the “seven layers of the OSI” only to discover in the real world that actual implementations rarely map cleanly to the OSI networking model.
Still, it maps well enough that we continue to use it as the reference model it was intended to be. Most of us understand that Layer 4 refers to TCP, Layer 7 to HTTP, and Layers 2 and 3 to Ethernet and IP, respectively. The model and the protocols are nearly interchangeable in conversation.
A few years back we even got into debates about where, exactly, the overlay protocols associated with SDN and virtual networking belonged. They weren’t really layer 2, but they weren’t exactly layer 3, either. They were sort of in between.
We were able to ignore that, for the most part, and just waved vaguely at the two layers and referred to it merely as “overlay networking.” Everyone understood what we meant, and we had other things to argue about – like the definition of cloud and whether DevOps was appropriate for the enterprise or not.
Enter containers – or more precisely, container networking. The highly volatile and automated world of Container Orchestration Environments (COE) has given rise to the need for yet more layers in the networking stack.
As with overlay networking, we are disinclined to create new layers in the OSI model because, well, it's a standard reference at this point, and changing standards can take a long time. A. Long. Long. Time. But like overlay networking, these layers exist in practice as real interfaces in the network stack. And like overlay networking, I am inclined to give them "half" layers because they are that important to the future of networking in COE.
The first ‘half-step’ lies between Layers 4 and 5. This is where service mesh execution and automation come into play. In a nutshell, a service mesh is built from sidecar-deployed proxies that intercept every request, which allows them to execute domain-specific routing for services across the container environment. It assumes the lower-order protocols exist and effectively extends them. This is necessary because all the network layers below it assume connectivity and routing are based solely on IP address. And while that’s ultimately how packets get moved around in a container environment, the decision on which IP address and port to send a given request to is not based on that information. It’s based on a variety of variables related to the status and location of the service and application. Essentially, we’re looking at meta-information about a request and using it to determine how to route it. This meta-information is critical to establishing the “mesh” that in turn assures availability and scale of each service.
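The idea can be sketched in a few lines. This is a minimal illustration, not any particular mesh's API: the service names, registry fields, and load-based policy are all assumptions made up for the example. The point is that the routing decision consumes service metadata (name, version, health, load), and only at the end does an IP and port fall out.

```python
# Sketch of metadata-driven ("Layer 4.5") request routing.
# Field names and the least-loaded policy are illustrative assumptions;
# a real mesh's sidecar proxies apply far richer, configurable policies.

def route(request_meta, registry):
    """Pick an IP:port for a request using service metadata,
    not the packet's own addressing information."""
    candidates = [
        ep for ep in registry
        if ep["service"] == request_meta["service"]
        and ep["version"] == request_meta.get("version", ep["version"])
        and ep["healthy"]
    ]
    if not candidates:
        raise LookupError("no healthy endpoint for " + request_meta["service"])
    # Prefer the least-loaded healthy instance.
    return min(candidates, key=lambda ep: ep["load"])

registry = [
    {"service": "checkout", "version": "v1", "ip": "10.0.0.5",
     "port": 8080, "healthy": True,  "load": 0.7},
    {"service": "checkout", "version": "v1", "ip": "10.0.0.6",
     "port": 8080, "healthy": True,  "load": 0.2},
    {"service": "checkout", "version": "v1", "ip": "10.0.0.7",
     "port": 8080, "healthy": False, "load": 0.0},
]

chosen = route({"service": "checkout", "version": "v1"}, registry)
print(chosen["ip"], chosen["port"])  # the healthy, least-loaded instance
```

Note that the IP address is the *output* of the decision, never the input – which is exactly why this logic has no clean home in the traditional Layer 3/4 view.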
The second ‘half-step’ sits near the top, above Layer 7. All jokes about the “human layer” (Layer 8) aside, a COE actually does place a layer of metadata above the application that provides the ‘glue’ that makes scale in containerized environments work. These are the application or service “tags” used to identify the discrete services for which the COE offers automated scale. Without the tags, it is nearly impossible to distinguish one app from another, because every layer of the OSI stack identifies endpoints by the same constructs: IP address and port. While we’ve long understood architectural implementations that share those constructs external to the environment (virtual servers and host-based networking), containers have created the same issues inside the environment. The sharing of ports and IP addresses makes it difficult to differentiate between services at the speeds required.
The addition of ‘tags’ at Layer 7.5 in containerized environments affords networking services (like load balancing and routing) the ability to uniquely identify resources and to ensure scale and availability at the same time.
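A quick sketch makes the problem and the fix concrete. The container IDs, addresses, and tag keys below are invented for illustration (they loosely resemble the label/selector pattern popularized by orchestrators, but are not any specific platform's schema). Two services share the same IP and port; only their tags can tell them apart.

```python
# Sketch of "Layer 7.5" tags: two containerized services share an IP
# and port, so Layer 3/4 constructs cannot distinguish them -- their
# tags can. IDs, addresses, and tag names are illustrative assumptions.

instances = [
    {"id": "c1", "ip": "10.0.0.9", "port": 443,
     "tags": {"app": "storefront", "tier": "web"}},
    {"id": "c2", "ip": "10.0.0.9", "port": 443,
     "tags": {"app": "inventory", "tier": "api"}},
]

def select(instances, **wanted_tags):
    """Return instances whose tags match every requested key/value pair."""
    return [i for i in instances
            if all(i["tags"].get(k) == v for k, v in wanted_tags.items())]

# By IP:port alone, both instances look identical...
same_addr = [i for i in instances if (i["ip"], i["port"]) == ("10.0.0.9", 443)]
print(len(same_addr))  # 2 -- indistinguishable at Layers 3/4

# ...but a tag match identifies exactly the service to scale or route to.
print([i["id"] for i in select(instances, app="inventory")])  # ['c2']
```

This is the same selection a load balancer or autoscaler has to perform on every request, which is why the tag lookup needs to be fast and why it behaves like a layer in its own right.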
The new “container layers” allow the environment to decouple itself from networking constructs and, in the process, assure greater portability than previous technologies that remained tied tightly to other layers in the network stack. By operating at “half layers” and assuming the existence of the traditional layers, containerized environments gain independence from any specific networking scheme or architecture, and can move with equal ease between dev and test, test and production, on-premises and cloud.