BLOG | OFFICE OF THE CTO

Multiple Clouds Versus Multi-Cloud: Making it Easier to Bridge the Gap

Published October 07, 2021

Despite broad adoption of multi-cloud strategies in the enterprise, there remains a dearth of effective solutions that address the many challenges organizations face in executing on them. One such challenge is the secure interconnection of workloads hosted across multiple providers, a problem that only intensifies as more cloud vendors are added.

According to a Propeller Insights survey, 75% of organizations deploy apps in multiple clouds, and 63% of those use three or more. Overall, more than half (56%) find it difficult to manage workloads across different cloud providers, citing challenges with security, reliability, and connectivity in general.

Much of this difficulty can be attributed to competing operational models. Each cloud offers services and APIs unique to that provider, often requiring customers to adopt different skillsets, policies, and approaches. Every cloud offers a “software-defined network” experience, but no two clouds offer the same “software-defined network” experience. When these cross-environment differences are not properly accounted for, the result is inconsistent configurations that undermine security and performance.

This interconnectivity challenge is heightened by the introduction of cloud-native (microservices-based) applications, which significantly balloon the number of instances that must communicate across environments. Propeller found that “over 70% of respondents say that security problems are exacerbated in multi-cloud environments by the differing security services between providers (77%), the growing number of APIs (75%), and the prevalence of microservices-based apps (72%).”

This difficulty is driving both the need and the demand for a new approach to multi-cloud networking.

The Challenge of Multi-Cloud Networking

Multi-cloud networking unifies two different approaches to simplifying application delivery:  

  1. It embraces software-defined internetworking from the bottom up. The fixed physical infrastructure serves as a capable underlay, while a standard cross-cloud control plane creates an overlay that abstracts the differences between networking environments and enables dynamic, virtual networking on top of it, significantly simplifying the challenge of using multiple cloud environments together (a minimal sketch of this idea follows the list).
     
  2. It extends simple container networking into sophisticated distribution from the top down. The industry has begun to standardize on container workloads as a de facto application unit, but the relatively unsophisticated networking underneath them must be extended across environments, pointing toward the eventual emergence of a distributed cloud that manages application traffic between those environments.
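
To make the first approach concrete, below is a minimal, hypothetical sketch in Python (not an F5 product API): each provider's native networking is wrapped in a small adapter with a common shape, and a single overlay control plane drives all of the adapters so that one intent replaces several provider-specific configurations. Every class, method, and name here is illustrative.

```python
from abc import ABC, abstractmethod


class CloudUnderlay(ABC):
    """Hypothetical adapter around one provider's native networking API."""

    @abstractmethod
    def create_segment(self, name: str, cidr: str) -> str:
        """Create a provider-native network segment and return its ID."""

    @abstractmethod
    def attach_workload(self, segment_id: str, workload: str) -> None:
        """Attach a workload to a provider-native segment."""


class CloudA(CloudUnderlay):
    def create_segment(self, name, cidr):
        print(f"[cloud-a] creating virtual network {name} ({cidr})")
        return f"cloud-a/{name}"

    def attach_workload(self, segment_id, workload):
        print(f"[cloud-a] attaching {workload} to {segment_id}")


class CloudB(CloudUnderlay):
    def create_segment(self, name, cidr):
        print(f"[cloud-b] creating VPC-like segment {name} ({cidr})")
        return f"cloud-b/{name}"

    def attach_workload(self, segment_id, workload):
        print(f"[cloud-b] attaching {workload} to {segment_id}")


class OverlayControlPlane:
    """One control plane that stitches per-cloud segments into a single
    virtual network, hiding underlay differences from the operator."""

    def __init__(self, underlays: dict):
        self.underlays = underlays
        self.segments = {}

    def create_virtual_network(self, name: str, cidr: str) -> None:
        # The same call fans out to every underlay adapter.
        for cloud, underlay in self.underlays.items():
            self.segments[cloud] = underlay.create_segment(name, cidr)

    def attach(self, cloud: str, workload: str) -> None:
        self.underlays[cloud].attach_workload(self.segments[cloud], workload)


# One operation, expressed once, applied consistently across both clouds.
control_plane = OverlayControlPlane({"cloud-a": CloudA(), "cloud-b": CloudB()})
control_plane.create_virtual_network("payments-net", "10.20.0.0/16")
control_plane.attach("cloud-a", "payments-api")
control_plane.attach("cloud-b", "payments-db")
```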

The convergence of these two elements has already led to two layers of abstraction in customer application architectures: Kubernetes to facilitate workload management and SDN to simplify internetworking. But the way these two approaches converge today still leaves customers with significant pain.

Many organizations struggle because these technologies force operations teams into overly granular configurations just to achieve a standardized internetworking approach across multiple clouds. Even for extremely simple networking tasks like VLAN management, one cloud provider's approach is distinctly different from another's… and both may be completely foreign to the approach the enterprise takes for its own private cloud. Because networks are provisioned and managed differently across cloud properties, organizations often have to maintain a staff of experts in each environment's particulars just to keep pace with network standardization.
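
As a purely illustrative example of that divergence (the provider schemas below are made up and do not reflect any real cloud API), consider what one logical network segment turns into once it must be expressed in each environment's native vocabulary:

```python
# Hypothetical example: the same logical network segment, expressed in the
# made-up vocabulary of two different providers. Neither structure reflects
# a real cloud API; the point is the divergence itself.
logical_segment = {"name": "app-tier", "cidr": "10.30.1.0/24", "region": "us-east"}


def to_cloud_a(segment: dict) -> dict:
    # Provider A thinks in "virtual networks" with address spaces.
    return {
        "virtualNetworkName": segment["name"],
        "addressSpace": [segment["cidr"]],
        "location": segment["region"],
    }


def to_cloud_b(segment: dict) -> dict:
    # Provider B thinks in "VPCs" with CIDR blocks and zone suffixes.
    return {
        "vpcName": segment["name"],
        "cidrBlock": segment["cidr"],
        "availabilityZone": segment["region"] + "-1a",
    }


# Two different payloads for a single intent: this is the translation work
# that otherwise lives in the heads of per-environment experts.
print(to_cloud_a(logical_segment))
print(to_cloud_b(logical_segment))
```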

Distributed Cloud as a Solution 

The struggle to manage these two solutions is intense enough on its own; adding more cloud providers to the battle only magnifies the problem. Clearly, there are better ways to tackle this issue: moving Kubernetes and SDN closer together, solving for environmental differences, and removing the need to be a networking expert to make it all happen. At F5, we call this approach the “distributed cloud.”

Customers generally encounter this problem when they weigh business decisions and application needs before selecting the “best network/cloud” for their service. That decision incorporates a variety of factors: cost, ability to launch, speed of deployment, the need to be in a particular region… whatever the customer decides is critical to their application’s success. Rarely are network-side factors or interoperability with other clouds considered in the initial business decision. Unfortunately, this sets the stage for new challenges as the application moves through its lifespan and other parts of the business make different decisions about cloud use.

At F5, we believe there is nothing inherently wrong with choosing cloud technologies that are particularly suitable to business needs, even when that leads to using multiple vendors or environments. Rather than suggesting our customers pursue the unique benefits of any single cloud provider, we aim to create commonality across all of them, with build-to-scale solutions that are reasonable and within reach of customers’ network skills, application needs, and business goals. This is the approach we call the “distributed cloud.”

Our approach is backed by three key beliefs:

  1. We understand that the network must support a model of anywhere, anytime, without the loss of quality or customer experience.
     
  2. We assert that any internetworking cloud should be simple, complete, and consistent no matter what underlying cloud our customer might choose.
     
  3. We believe that our customers should be able to get more value through simple, declarative, API-driven unification across control and management planes (see the sketch that follows this list).
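
The third belief is the one that most changes day-to-day operations, so here is a minimal, hypothetical sketch of what declarative, API-driven unification could look like (the field names and reconcile loop are illustrative only, not an F5 or cloud-provider API): the operator states intent once, and a single control loop works out what each environment still needs.

```python
# Hypothetical declarative intent: one desired state spanning several
# environments, instead of per-cloud imperative procedures.
desired_state = {
    "network": "shared-services",
    "cidr": "10.40.0.0/16",
    "clouds": ["cloud-a", "cloud-b", "on-prem"],
    "policy": {"encrypt_in_transit": True, "allow": ["https"]},
}


def reconcile(desired: dict, observed: dict) -> list:
    """Compute the actions needed to make every environment match the intent."""
    actions = []
    for cloud in desired["clouds"]:
        current = observed.get(cloud, {})
        if current.get("cidr") != desired["cidr"]:
            actions.append(f"{cloud}: provision segment {desired['cidr']}")
        if current.get("policy") != desired["policy"]:
            actions.append(f"{cloud}: apply policy {desired['policy']}")
    return actions


# Observed state, as it might be reported back by each environment's API.
observed_state = {
    "cloud-a": {"cidr": "10.40.0.0/16",
                "policy": {"encrypt_in_transit": True, "allow": ["https"]}},
    "cloud-b": {"cidr": "10.40.0.0/16", "policy": {}},
    # "on-prem" has not been provisioned yet.
}

for action in reconcile(desired_state, observed_state):
    print(action)
```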

The distributed cloud model recognizes that the users of our customers’ applications must be served with the highest levels of quality, performance, and security in near-real time. Our aim is to provide a distributed cloud that brings cross-cloud elasticity without massive cost increases, time constraints on provisioning, or environmental variances.

F5 has already created a broad portfolio of solutions that meet these critical moments head on with a congruent set of technologies and practices, and we are working hard to extend this to every application in our customers’ architectures. As part of our mission to move towards more Adaptive Applications, we intend to help customers complete these transitions so they can move workloads to the most efficient and effective locations, regions, or cost models with ease, without employing a staff of network wizards for each environment.

What makes us unique is our customers' ability to use our technologies to build multi-cloud bridges, at cloud scale, to their critical on-prem workloads. Our Volterra acquisition lets customers bring the same fluency and capability to public cloud workloads and networking through a true common control and management plane, simplifying the internetworking challenge: no more building separate teams for every cloud and managing a pile of environment-specific APIs just to get connectivity running between them!

We continue to make this a priority: providing simple, effective multi-cloud business solutions to our customers. We firmly believe that the answer to organizations’ pleas of “I need it done now!” is truly within reach, with F5’s technologies creating a better, faster, more secure, and more distributed cloud.

Learn how to move beyond managing multiple clouds and embrace multi-cloud networking