BLOG | OFFICE OF THE CTO

Microservices: Less Micro and More Services

Lori MacVittie
Published October 01, 2018

Service-oriented Architecture (SOA) was declared dead nearly ten years ago. A contributing - but rarely discussed - factor in its demise was the network. Latency between services prevented architects from fully decomposing applications into services with the granularity needed to encourage reuse and, ultimately, composable applications.

Enter Microservices Architecture (MSA). Its principles demand even greater decomposition, with a focus on functions (verbs) over objects (nouns) as the primary criterion for divvying up an application. From this seemingly subtle change in focus comes greater granularity of services; there are many more functions than objects in any given system.
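
To make the verbs-versus-nouns distinction concrete, here's a minimal sketch in Go (the Customer type and service names are hypothetical, not anyone's actual API): a noun-oriented design bundles every operation on an object into one service, while a verb-oriented design turns each function into its own independently deployed service.

```go
package main

import "fmt"

// Customer is an illustrative domain object (a noun).
type Customer struct {
	ID   string
	Name string
}

// Noun-oriented (SOA-style): one service owns every operation on the object.
type CustomerService interface {
	Create(c Customer) error
	Get(id string) (Customer, error)
	Update(c Customer) error
	Delete(id string) error
}

// Verb-oriented (microservices-style): each function becomes its own
// service, so four deployable, independently scaled services where the
// noun-oriented design had one.
type CreateCustomer interface{ Call(c Customer) error }
type GetCustomer interface{ Call(id string) (Customer, error) }
type UpdateCustomer interface{ Call(c Customer) error }
type DeleteCustomer interface{ Call(id string) error }

func main() {
	fmt.Println("one noun-oriented service vs. four verb-oriented services")
}
```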

The network is ready. Speeds and feeds of the physical network have increased dramatically. Compute, too, has advanced in accordance with Moore's Law and rendered networking latency almost a non-issue.

Unfortunately, communication latency will take its place.

We have replicated the Internet's complexity inside the container environments used to deploy microservices. While a microservice may not need DNS, it still relies on the same kind of name-based resolution that runs the Internet. Application tags - metadata - must be translated into an IP address. Service registries and complex iptables entries act as a miniature DNS, translating tags to addresses and enabling communication between services.
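
Here's a minimal sketch of that translation in Go, with made-up tags and addresses: the registry plays the role DNS plays on the Internet, mapping a service tag to the address of a live instance before any call can be made.

```go
package main

import (
	"errors"
	"fmt"
)

// registry maps service tags (metadata) to instance addresses, playing
// the role DNS plays on the Internet. All names here are hypothetical.
var registry = map[string][]string{
	"checkout": {"10.0.4.17:8080", "10.0.4.23:8080"},
	"payment":  {"10.0.7.2:8080"},
}

// resolve translates a service tag into a concrete address. In a real
// platform this is a lookup against a service registry (or DNS plus
// iptables rules), and it sits on the data path of every call.
func resolve(tag string) (string, error) {
	addrs, ok := registry[tag]
	if !ok || len(addrs) == 0 {
		return "", errors.New("no instances registered for " + tag)
	}
	return addrs[0], nil // naive selection; real registries load-balance
}

func main() {
	addr, err := resolve("payment")
	if err != nil {
		panic(err)
	}
	fmt.Println("calling payment service at", addr)
}
```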

Exacerbating the latency associated with this process is the ephemeral nature of microservices and their associated containers. With lifetimes measured in seconds or minutes instead of hours or months, name resolution must occur with every call. Time to live (TTL) inside the container world is, effectively, zero.
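
A rough sketch of why a near-zero TTL hurts - the lookupRegistry helper and its 2 ms delay are assumptions for illustration, not measurements: with long-lived endpoints a cache absorbs the lookup cost, but when instances live for seconds the TTL is effectively zero and nearly every call pays for the registry round trip.

```go
package main

import (
	"fmt"
	"time"
)

// lookupRegistry stands in for a round trip to a service registry;
// the 2ms delay is an illustrative assumption, not a measurement.
func lookupRegistry(tag string) string {
	time.Sleep(2 * time.Millisecond)
	return "10.0.4.17:8080" // hypothetical instance address
}

// resolver caches lookups for a fixed TTL.
type resolver struct {
	ttl     time.Duration
	cache   map[string]string
	expires map[string]time.Time
}

func (r *resolver) lookup(tag string) string {
	if exp, ok := r.expires[tag]; ok && time.Now().Before(exp) {
		return r.cache[tag] // cache hit: no registry round trip
	}
	addr := lookupRegistry(tag) // cache miss: pay the resolution cost
	r.cache[tag] = addr
	r.expires[tag] = time.Now().Add(r.ttl)
	return addr
}

func main() {
	// With container lifetimes measured in seconds, the TTL is
	// effectively zero, so every call below takes the miss path.
	r := &resolver{ttl: 0, cache: map[string]string{}, expires: map[string]time.Time{}}
	start := time.Now()
	for i := 0; i < 10; i++ {
		r.lookup("payment")
	}
	fmt.Println("10 calls with a zero TTL took", time.Since(start))
}
```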

Even if we ignore this reproduction of one of the biggest sources of communication latency, we are left with the latency associated with TCP. It is not - nor has it ever been - free to set up or tear down a TCP connection. This source of latency is certainly small, but it is absolutely additive. Each connection - each microservice - required to execute a single transaction adds latency that eventually breaches the tolerance for delay.
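
As a back-of-the-envelope illustration - the 0.5 ms handshake figure is an assumed intra-cluster round trip, not a benchmark - the setup cost alone grows linearly with the number of services in the call chain:

```go
package main

import "fmt"

func main() {
	// One TCP three-way handshake costs roughly one round trip;
	// 0.5 ms is an assumed intra-cluster RTT, not a measurement.
	const handshakeMs = 0.5
	for _, services := range []int{3, 10, 30} {
		fmt.Printf("%2d services in the call chain -> %.1f ms of pure connection setup\n",
			services, float64(services)*handshakeMs)
	}
}
```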

HTTP/2, despite its dramatic changes in behavior, does not address this problem. HTTP/2 is designed to facilitate the transfer of multiple objects over the same connection, thereby reducing latency for multi-object content such as web pages and web-based applications. Microservices, however, are ideally designed so that each service returns a single response. While multiple requests over an established connection certainly reduce communications overhead, that benefit disappears when a transaction's requests are distributed across multiple discrete services.
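
A small sketch of the distinction, with hypothetical service names: HTTP/2 streams share a connection only when they target the same origin, so a multi-object web page collapses onto one connection while a multi-service transaction still needs one connection per service.

```go
package main

import "fmt"

// connectionsNeeded counts distinct origins; HTTP/2 can multiplex
// streams only within a single connection to a single origin.
func connectionsNeeded(origins []string) int {
	distinct := map[string]bool{}
	for _, o := range origins {
		distinct[o] = true // one connection per distinct origin
	}
	return len(distinct)
}

func main() {
	// A web page: many objects, all served from one origin.
	webPage := []string{"app.example", "app.example", "app.example", "app.example"}
	// A microservice transaction: one request each to several services
	// (hypothetical names).
	transaction := []string{"cart.svc", "pricing.svc", "inventory.svc", "payment.svc"}

	fmt.Println("web page over HTTP/2:", connectionsNeeded(webPage), "connection")
	fmt.Println("multi-service transaction:", connectionsNeeded(transaction), "connections")
}
```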

The problem is, then, not network latency but communication latency. Connections still count, and improvements in protocols designed to enhance performance of web-based, multi-transactional communications will not help multi-service transactions.

The result is SOMA: Service-Oriented Micro Architectures. A strange hybrid of service-oriented and microservices architectures that leaves one wondering where one ends and the other begins. Decomposition of an application into its component function-based services is constrained by communication latency and, ultimately, by the sustainability of the code base. While network advances have certainly increased the granularity with which decomposition can reasonably be accomplished, they have not eliminated the constraint. Another factor is that there are orders of magnitude more functions in an application than objects, which makes managing a purely microservices-architected application something of a logistical nightmare for network operations, let alone app developers. Combined with the inherent issue of communication latency, this is why organizations are increasingly developing object-oriented microservices instead of truly function-oriented microservices.

This is ultimately why we see applications decomposing beyond the traditional three-tier architecture, but not so far as to be a faithful representation of function-based decomposition.

Until we address the latency inherent in connection-based (TCP) communications - either with something new or by zeroing in on the system-level implementations - we will continue to be constrained to microservices architectures that are less micro and more services.