Hybrid load balancing refers to distributing client requests across a set of server applications that are running in various environments: on premises, in a private cloud, and in the public cloud. Hybrid load balancing maximizes the reliability, speed, and cost‑effectiveness of delivering content no matter where it is located, resulting in an optimum user experience.
To review general information about load balancers, see Save 80% Compared to Hardware Load Balancers.
Today many companies are migrating applications from on‑premises servers to the public cloud, to take advantage of benefits like lower costs and ease of scaling in response to demand. But a complete migration doesn’t usually happen overnight, and the cloud isn’t suitable for every application, so companies often have to manage a mix of on‑premises and cloud applications. For example, a company might use an Outlook email server that is installed and managed on premises by its internal IT team, but keep customer information in a cloud‑based CRM such as Salesforce.com, and host its ecommerce store on Amazon Web Services. With a hybrid load balancing solution, users access the applications through a single point of entry and the load balancer identifies and distributes traffic across the various locations.
Companies that have invested in private clouds face an even more complex situation, because they need to load balance across three resource locations. Like the public cloud, a private cloud is a virtual data center hosted offsite by a cloud vendor. It is unlike the public cloud in that it guarantees dedicated storage and computing power that is not shared with other customers of the cloud vendor.
Using a hybrid load balancing solution, companies can distribute traffic among on‑premises servers, private clouds, and the public cloud in a seamless manner so that every request is fulfilled by the resource that makes the most sense. The load‑balancing decision can be based on factors like the current load on each server, server response time, the user's geographic location, and the relative cost of serving the request from each environment.
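As an illustrative sketch of how this looks in practice, an NGINX configuration might define a single upstream group that spans all three environments, with weights reflecting each environment's capacity or cost (the hostnames below are hypothetical placeholders, not real servers):

```nginx
# One upstream group spanning on-premises, private-cloud, and
# public-cloud servers; hostnames are hypothetical placeholders.
upstream hybrid_backend {
    least_conn;  # route each request to the server with the fewest active connections
    server app1.datacenter.example.com   weight=3;  # on-premises server
    server app1.privatecloud.example.com weight=2;  # private-cloud instance
    server ec2-app1.example.com          weight=1;  # public-cloud (EC2) instance
}

server {
    listen 80;
    location / {
        proxy_pass http://hybrid_backend;  # single point of entry for clients
    }
}
```

With this kind of setup, clients connect to one address and the load balancer decides, per request, which environment fulfills it.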
Traditional load balancing solutions rely on proprietary hardware housed in a data center, and can be quite expensive to acquire, maintain, and upgrade. Software‑based load balancers can deliver the performance and reliability of hardware‑based solutions at a much lower cost, because they run on commodity hardware.
Most companies follow best practice and deploy load balancers in the same environment as the resources they are load balancing: on premises for applications running in the data center and in the cloud for cloud‑hosted applications. Cloud infrastructure vendors typically do not allow customer or proprietary hardware in their environment, so companies that deploy hardware load balancers on premises still must use a software load balancer for cloud resources. That requires IT personnel to understand and maintain two different load balancing solutions. In contrast, the same software‑based load balancing solution, like NGINX and NGINX Plus, can be deployed both on premises and in the cloud, reducing operational complexity, costs, and the time it takes to develop and deploy applications.
NGINX Plus and NGINX are best‑in‑class load‑balancing solutions used by high‑traffic websites such as Dropbox, Netflix, and Zynga. More than 350 million websites worldwide rely on NGINX Plus and NGINX Open Source to deliver their content quickly, reliably, and securely.
As a software load balancer, NGINX Plus is significantly less expensive than hardware solutions with similar capabilities. Furthermore, it can be easily deployed in a cloud infrastructure such as Amazon Elastic Compute Cloud (EC2) to load balance across resources in the public cloud along with on‑premises and private‑cloud resources.
To learn more about the benefits of using NGINX Plus to load balance your applications, download our ebook, Five Reasons to Choose a Software Load Balancer.