Edge has always existed. Well, at least since the first wave of the Internet drove the need to solve the “last mile” problem. With consumers' eagerness to explore the seemingly endless Internet hampered by unreliable dial-up connections, the first iteration of edge emerged with a solution: move static content closer to the user.
Since then, two additional waves of Internet evolution have put pressure on edge computing to also evolve.
Each wave of the Internet eliminates obstacles to ubiquitous, real-time computing. Yet each wave introduces new challenges, too. F5 CTO Geng Lin covers this evolutionary path in more detail in his latest paper, “The Third Wave of the Internet.”
In that paper we come to the inescapable conclusion that, until recently, there’s been no need for a platform at the edge. Challenges with connections were solved by advances in networking. Application design and architecture readily adapted to cloud, but the growing digital economy attracted bad actors. Volumetric attacks disrupted business while malicious code and malware became a path to profit.
There was still no need for a platform at the edge, because its evolutionary path was to protect businesses and applications by inserting security services closer to the user. This meant bad actors were detected and neutralized before they could disrupt business or breach a company’s defenses.
But today we’re riding the third wave of the Internet, and while it is bringing new capabilities it is also introducing new challenges. While broadband connectivity is nearly ubiquitous, the number of devices and users constantly communicating over the network still poses a performance challenge. Attackers have grown even more devious and seek to exploit the pervasiveness of applications and devices, as well as consumers' seemingly insatiable appetite for digital engagement.
The response to these challenges is the inevitable evolution of edge. But the only things we can move closer to the user now are the apps and data they need to engage in digital activities.
Edge, as it has evolved, was not built to support the distribution of apps and data. The ability to support such capabilities requires a platform. An edge application platform.
One does not simply throw together such a platform. Bolting on the ability to deploy compute on existing edge networks does not fully address the challenges posed by the third wave of the Internet. Nor does it fully take advantage of one of the significant shifts in computing: the ability of devices and endpoints to participate in solutions.
Applications and devices are no longer passive receivers of information. They are active participants, often initiating connections and dictating decisions. Existing edge platform approaches are based on applications as passive receivers of information. A new approach is necessary to fully leverage the power of distributed compute.
That approach is one that ensures the needs for security, scale, and speed of applications at the edge are met without sacrificing developer and operational experiences. It also requires attention to parallel trends in technology around observability and the use of AI and machine learning for business, security, and operational automation.
While broad characteristics, such as those described by our CTO Geng Lin in his Edge 2.0 manifesto, provide overarching guidance for an Edge 2.0 platform, design considerations at the architectural level are also needed.
It’s easy to say that such a platform should be “secure by default” and “provide native observability” as well as “deliver autonomy,” but what do those mean in terms of technology and approaches that must be considered? More importantly, how should they be incorporated into an Edge 2.0 application platform?
These questions—and more—are answered in our latest paper, “Edge 2.0 Core Principles,” authored by Distinguished Technologist Rajesh Narayanan and Mike Wiley, F5 CTO, Applications.
The path to taking advantage of the evolution of the edge ecosystem is clear, and that path is through an Edge 2.0 application platform.