I’ve been writing a lot on the topic of the sometimes tumultuous relationship between app architectures and the network. For the most part, these have focused on how changes in the app architecture impact the network and the app services used to provide speed, scale, and security. Today, however, we’re going to turn that relationship around and look at how the network has a pretty significant impact on applications and, in turn, on innovation.
I was reminded of that by a recent post on High Scalability, in which its author illustrates why the network matters and how the evolution occurred – right up to today with serverless and why it’s possible to actually consider a world in which the Internet effectively is the computer. It’s long, but a good read, and I encourage you to take some time to read through it. I’ll sum up here, but there’s a lot I’m not hitting that you’ll find interesting in the source article.
Back in the days of dial-up access to the Internet, web sites were mostly text with perhaps one or two (low-quality) images. If you wanted something interactive you fired up gopher or telnet, and used a text-based terminal. There was simply no way the last-mile over dial-up provided for anything more complex.
As the speed of dial-up increased, eventually being replaced with the first “broadband” offerings, apps started to display more images and began to break up into multiple pages, because the network was fast enough to transmit that information without the consumer getting bored and running off to play Diablo. This pattern continued until scale became an issue: it was no longer speed holding sites back, but scale. Load balancing was suddenly a gold mine.
Network speeds continued to increase – and not just the last-mile but inside the data center and along the Internet’s backbone. Web 2.0 introduced the notion of web apps to the world, giving us responsive, interactive web sites that took advantage of the network’s ability to ensure the scale and speed of data being exchanged.
Application architectures changed because of network advances. Without speed and scale, the world of Web 2.0 would never have been born, because it simply would not have satisfied the need for speed that is innate in every consumer. But these apps still followed the traditional three-tier model, comprising a presentation layer, a logic layer, and a data layer. They were merely distributed across the Internet.
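To make the three-tier shape concrete, here’s a minimal sketch in Python. All the names and data are hypothetical; the point is only how each tier calls the one below it: presentation calls logic, logic calls data.

```python
# Data tier: owns storage (here just an in-memory dict standing in for a database).
PRODUCTS = {1: {"name": "widget", "price": 9.99}}

def fetch_product(product_id):
    """Data tier: retrieve a raw record."""
    return PRODUCTS.get(product_id)

def product_with_tax(product_id, rate=0.08):
    """Logic tier: apply business rules to the raw record."""
    product = fetch_product(product_id)
    if product is None:
        return None
    return {**product, "total": round(product["price"] * (1 + rate), 2)}

def render_product(product_id):
    """Presentation tier: turn the result into markup for the user."""
    product = product_with_tax(product_id)
    if product is None:
        return "<p>Not found</p>"
    return f"<p>{product['name']}: ${product['total']}</p>"
```

In the Web 2.0 era the tiers stayed exactly like this conceptually; what changed was that the function calls between them became network calls across the Internet.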
Soon after, SOA (Service Oriented Architectures for you youngins – get off my lawn, by the way) was all the rage. Using a combination of standards (SOAP, XML) and building on existing service-oriented concepts, “web services” took over. Web services and SOA introduced the concept of decomposing applications into individual services. If that sounds familiar, it should, because today we call that concept “microservices.”
The problem for web services was that XML is a beefy format and parsing it out on the client (or server) took time. Because XML was at the heart of SOA, this meant each service consumed X amount of time to exchange over the network and process. As there is a limited amount of time available in which to process a request from a consumer, this necessarily limited the number of services into which an application could be reasonably decomposed. Two or three services were the most one could hope to achieve.
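The overhead is easy to see side by side. Below, a hand-written SOAP-style envelope (illustrative, not from any real service) is compared with an equivalent compact JSON payload for the same logical request, using only the Python standard library:

```python
import json
import xml.etree.ElementTree as ET

# A SOAP-style envelope for a hypothetical GetQuote request.
soap = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote><Symbol>FFIV</Symbol></GetQuote>
  </soap:Body>
</soap:Envelope>"""

# The same logical request as compact JSON.
compact = json.dumps({"method": "GetQuote", "symbol": "FFIV"})

print(len(soap), len(compact))  # the XML envelope is several times larger

# Both parse with the stdlib, but the XML must be walked as an element
# tree rather than mapped directly onto native types.
root = ET.fromstring(soap)
assert root.find(".//Symbol").text == "FFIV"
assert json.loads(compact)["symbol"] == "FFIV"
```

Multiply that envelope-and-parse cost by every service boundary a request crosses, and the two-or-three-service ceiling follows naturally.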
Today, the networks are faster and fatter from end to end. Data center (and cloud) networks are measured in gigabits per second, not megabits per second, and even broadband connections would put the early corporate network speeds to shame. That means faster transfers over the network. Combined with incredible increases in compute and I/O speed (because Moore’s Law is right), applications have been able to decompose into tens and even hundreds of services that can be called and executed within expected response parameters. We call these microservices.
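The arithmetic behind that shift is simple. With a fixed response-time budget, the number of sequential service calls you can afford is the budget divided by the per-call cost. The numbers below are illustrative, not measurements:

```python
def max_services(budget_ms, per_call_ms):
    """Number of sequential service calls that fit in a response-time budget."""
    return budget_ms // per_call_ms

BUDGET_MS = 200  # a common target for an interactive response

# SOA era: slow links plus XML parsing made each call expensive.
print(max_services(BUDGET_MS, 75))  # only a couple of services fit

# Today: fast data-center networks and lightweight formats.
print(max_services(BUDGET_MS, 2))   # dozens of services fit the same budget
```

Shrink the per-call cost and the same budget suddenly accommodates an order of magnitude more services, which is exactly the decomposition microservices depend on.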
These changes in the network have enabled modern application architectures and APIs. It’s encouraged real-time exchange of information in a way that would never have been possible in the early aughts of the century. In much the same way technology is now considered to be a key component of business strategy rather than taking on its traditionally supportive role, the network is increasingly a key component of applications. As we watch the next wave of architectures rolling in (that’s serverless), we’ll note that without a highly responsive, integrated network and app service tier providing near instantaneous response to scale and security events, such computing models are unattainable.
It’s less now about the raw speed of the network (we’re reaching the limits of the speed of light) and more about how quickly the network can respond to events: scaling up and down, stopping an in-progress attack, or routing around problems in the network or app infrastructure. The next generation of networking is software-defined, software-driven, and software-enabling. It’s also migrating toward a scalability model that embraces a just-in-time approach, requiring nearly instantaneous reaction from the services providing access, scale, and security to the containerized services they support.
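A toy sketch of that just-in-time model, with entirely hypothetical names and thresholds: a control loop watches the request rate and decides, right now, whether the service tier should scale up, scale down, or hold steady.

```python
import math

def desired_instances(current_rps, rps_per_instance=100, minimum=2):
    """Instances needed to absorb the current request rate (toy numbers)."""
    return max(minimum, math.ceil(current_rps / rps_per_instance))

def scale_event(running, current_rps):
    """Decide the action the network/app-service tier should take now."""
    target = desired_instances(current_rps)
    if target > running:
        return ("scale_up", target)
    if target < running:
        return ("scale_down", target)
    return ("steady", running)

print(scale_event(2, 950))   # load spike: scale up
print(scale_event(10, 120))  # load subsides: scale back down
```

The hard part in practice isn’t the arithmetic; it’s that the surrounding network services must react to the decision in near real time, which is the point of the paragraph above.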
“The network,” as we tend to refer to it, comprises services residing in a variety of software and hardware. The ability of “the network” to respond and provide services in a just-in-time model will, in part, determine the success of these emerging application architectural models.
“The network” has never been more important than it is right now.