WHITE PAPER

Data Centre Consolidation: Know Where You're Going and Why

Updated December 01, 2010

Introduction

Maintaining a piece of hardware is four to five times more costly than actually acquiring it. This will, of course, come as little surprise to CIOs and IT departments the world over who, particularly in times of economic stress, are focusing attention on reducing operational cost.

Saving money, whether on CapEx or OpEx, and keeping the business operational are entirely reasonable objectives. However, the true value of the IT department lies in its ability to make the wider business better — constantly striving to provide new applications and services that deliver real benefits for the individual and the business. Put another way, the really great IT people use technology to make their businesses more fleet of foot, more agile and, ultimately, more competitive.

To do so, IT must become more agile itself. Architectures must become more flexible, to support the rapid integration and delivery of new applications and services, as well as more resilient, more secure and more efficient.

This is a tall order in today’s ‘more with less’ climate. And this is exactly why consolidation — whether stripping out hardware, consolidating software to eliminate single-purpose applications, or unlocking the application from the hardware with virtualisation — has so much appeal and has become such a priority.

Consolidation Uncovered

Consolidating any line-of-business process has a role to play here, potentially delivering these ‘must have’ savings in terms of infrastructure and human capital. More than that, consolidation can encourage greater visibility and tighter control, assure more consistent and efficient workflows, and generally make the department or application easier to manage.

From a data centre perspective, an environmental angle can also be added. Reducing the number of sites should reduce power consumption. This will not only support the financial goals of the business but aid its Corporate and Social Responsibility agenda too.

However, consolidation (or virtualization for that matter) does not immediately solve the problem on the minds of many IT professionals today: that of running out of space.

Neither does it necessarily reduce cost, whether OpEx or CapEx. Consolidating a host of application servers from branch offices is a major job.

And it is one that could very well cause application performance to drop away. Should this happen, remedial action will then be required — perhaps a new WAN optimization tool or a ‘quick’ re-architect of the network to bring that performance back. Similarly, chopping down the number of servers is all very well, but forgetting about that middle layer — proxy, caching and SSL — could leave an environment at least as complex as the one it was meant to replace.

Security can suffer too. How do businesses ensure data integrity, and indeed cost-effective mirroring and geographic redundancy?

Added to this, enterprises may have applications that cannot be consolidated centrally for one reason or another. And for those that can be, there’s still the challenge of providing accessibility for hundreds or indeed thousands of remote employees who need to use these applications on a daily basis. Then there’s the issue of having to control it all.

So, far from delivering a solution, consolidation and virtualization have just brought you more problems.

It is precisely this issue of adding kit and complexity that is pushing expected return on investment (ROI) dates back, and requires businesses to embark on a more strategic journey to consolidation — one that begins with the outcome and travels backwards.

And that outcome, F5 argues, must be the user experience and the accessibility of the application, irrespective of its hosted location. The true mark of success is a consolidation programme that has no impact (other than a positive one) on the user experience, simply allowing users to do their jobs more effectively.

For this, detailed planning is not enough. What’s needed is a fundamental shift in emphasis (and philosophy) from the network to the application. The network, while critical as a delivery mechanism, is the ‘how’ in this equation. The real question is ‘why?’ And the answer begins and ends with the value that accessing the particular application delivers to the business.

Pitfalls and Potential

In the very best of scenarios, the application is constantly available — whether from a consolidated or virtualized host. Costs have been reduced through rationalization of kit, human capital costs have fallen, and management of the server and application portfolio has been simplified.

In the very worst case, the reverse is true. Costs have risen and there’s more kit ‘optimizing’ the network. It’s more complex and requires additional management hours. And at the end of the process, users know something has changed because they can no longer access their application as they once did.

So, let’s go back to the beginning. We’ll take the CIO’s demand for cost reduction as read and focus here on the data manager’s driver for change: a lack of space. If consolidation cannot, without additional hardware investment, increase capacity, then data managers can take the option of redistributing their estates — with all the onerous cost and management implications that come with it. Faced with that, they are forced to look elsewhere, and increasingly upwards.

Going into the Cloud

Going into the cloud is today’s preferred option for avoiding the cost of redistribution. Not only does it offer the opportunity for additional space, but also 24x7 application availability, security, and business continuity. Crucially, the software as a service (SaaS) model also provides a hitherto unseen level of service flexibility, enabling capacity to grow and shrink on demand in a way not possible with manual systems.

Of course, there are many definitions of what the cloud is, but it’s F5’s view that rather than a dramatic move away from IT models, cloud simply offers an evolutionary ‘style’ of computing. It is, to our mind, an architecture that combines existing tools and automates and orchestrates various processes to achieve an infrastructure that users can simply ‘plug into’.

And it is this automation and orchestration, underpinned by a high level of intelligence at the application layer, which is absolutely critical to successful data consolidation strategies. It is not so much the capabilities of any particular vendor or product, but the way in which those solutions are enabled to communicate and coordinate with one another; it is a symbiotic ecosystem.

For example, if a user makes a request of an application at a local data centre, rather than saying ‘no’ because the data centre has reached its capacity, this request becomes the catalyst for a whole set of processes to build and deploy a virtual machine into the network, and begin routing users to it.

It is the automation piece that triggers the creation of the VM, while orchestration ties all the automated pieces together with, for example, a load balancer that is told the machine is ready for service and that users can be directed to it.
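As a rough illustration of how that request-driven flow might hang together, the sketch below chains the two pieces: a capacity event triggers the automation step (building the VM), and the orchestration step then registers the new machine with a load balancer pool. The function names, threshold, and pool names are hypothetical stand-ins, not F5 APIs.

    # Illustrative sketch only: provision_vm and register_with_pool are hypothetical
    # stand-ins for whatever automation engine and load balancer interface a given
    # environment actually exposes.

    CAPACITY_THRESHOLD = 0.85  # assumed utilisation level that triggers scale-out


    def provision_vm(template: str) -> str:
        """Automation: build a VM from a template and return its address once
        the guest is booted and the application is running (simulated here)."""
        print(f"Building VM from template '{template}'...")
        return "10.0.0.42"


    def register_with_pool(pool: str, address: str) -> None:
        """Orchestration: tell the load balancer the machine is ready for service
        so user requests can be directed to it (simulated here)."""
        print(f"Adding {address} to pool '{pool}'")


    def handle_capacity_event(utilisation: float, pool: str, template: str) -> None:
        """A user request arriving at a full data centre becomes the catalyst,
        rather than a 'no'."""
        if utilisation < CAPACITY_THRESHOLD:
            return  # existing capacity can absorb the request
        address = provision_vm(template)
        register_with_pool(pool, address)


    handle_capacity_event(utilisation=0.92, pool="app-pool", template="app-server-vm")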

So, rather than changing how the data centre will look, the change will be how it is put together, with applications intelligence at its very core. And once this dynamic control layer is in place, realizing the costs, flexibility, and performance opportunities of cloud data centre services will be possible.

Virtualizing the Data Centre

Centralizing control over the data centre has as much to do with security as it does with application performance and accessibility. As we’ve already touched upon, business continuity, offering geographic redundancy, is critical — and not simply to assure compliance. Even here, the ability to reduce cost, and more, is significant.

Today, many enterprises build redundant sites as a backup and manually manage data replication and failover to the secondary site when needed. It is, in essence, an insurance policy. It’s also a non-performing asset. In an era of tenacious cost control and demands to do more with less, this is bad news.

By virtualizing data centre resources at both sites, it’s possible to turn these non-performers into ongoing available assets that will function in a distributed scenario to achieve maximum reliability and performance, regardless of location. For example, in an active-active data centre configuration, it’s possible to carry out data replication, upgrades, and maintenance on a more-frequent basis, increasing overall uptime and decreasing time-to-market for services.

Similar to the cloud model, virtualization also enables more flexible management of your server and applications portfolio. For example, virtualizing multiple low-cost, high-performance servers with applications and data will assure that, should one server go down, the business is not impacted. This helps you achieve high availability and performance without the high cost of multiple physical sites.

Virtualization in the Round

So, with high availability achieved, activity can be refocused on getting the most from the data centre consolidation effort. And here, IT must look to the layer between the switches and servers, where proxy, cache, and SSL assets reside. This layer can account for up to a third of the cost and complexity that data centre teams must manage. Getting virtualization right here is critical.

The problem is that despite consolidating servers into a blade environment, the hardware itself is often not virtualized. This means the density of virtual machines per blade is not as high as planned. Indeed, it’s not uncommon to have a single virtual machine running on a single blade, when the hardware is capable of supporting multiple machines.

This lack of dynamic movement and poor density is one of the major reasons why expected ROI from consolidation programmes is being pushed back.

The key is to virtualize this environment too; offload the cache (for example) from the hardware to allow a single blade to support multiple virtual machines. This way cost and complexity are dramatically reduced. Similarly, offloading will make the cache device available across multiple applications, encouraging best practice and enabling further cost reduction right across the environment.

Taken a step further, transitioning to a true dynamic services model will build intelligence into the infrastructure through strategic points of control. This offers reusable, integrated services for resources, applications, and clients by understanding the context of the user-resource interaction for a given environment, then taking appropriate action. It means you can offload existing services from the resources and deliver new services that were previously not possible.

Referencing Success

So what makes an effective data consolidation programme? The key point that cannot be stressed enough is to focus on the outcome — and a strategic outcome at that. Consolidating your data centre estate for the sake of short or medium term ROI is a blind alley, as many businesses have found out.

The complexities — whether it be applications that cannot be centralized, moving servers into a central location without due thought to the impact on application performance, or a failure to appreciate the man-hours such programmes demand — have resulted in delayed returns and a feeling that consolidation ‘over-promised and under-delivered’.

For F5, the crucial element is to focus on connecting users to the application rather than computers to data centres. It is the application, and how it is used, that provides the true value to the business.

As such, understanding that application, what it is doing, and how to make it run in exactly the way it should must be the starting point.

Taking this approach is the only way that the strategic objectives of the IT department and the business can be realized — namely the performance, end-user experience, control and availability benefits. And it all begins with unchaining the application.

Moving away from a network-centric view is key. Traditional networking has no concept of the application. Sure, it will know what a port is, but will it have the intelligence to recognize the application users are connecting to? In many cases the answer is no. Consider Port 80, traditionally used for handling web traffic. It now carries calls to thousands of applications, yet the network lacks the intelligence to discern the characteristics of each and every one. And this intelligence is critical to assuring optimum performance.
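To make the contrast concrete, here is a minimal sketch of what application-level awareness adds: every request below arrives on the same port 80, so only layer-7 data (the HTTP Host header and URI path) can tell the applications apart. The host names and pool names are hypothetical, and this is not a description of any F5 product’s internals.

    # Minimal sketch: all requests share port 80, so the routing decision must be
    # made from application-layer data rather than the port number alone.
    # Host names, paths, and pool names are illustrative only.

    POOLS = {
        ("intranet.example.com", "/hr"):      "hr-app-pool",
        ("intranet.example.com", "/finance"): "finance-app-pool",
    }
    DEFAULT_POOL = "general-web-pool"


    def choose_pool(host: str, path: str) -> str:
        """Pick a back-end pool from what the request says at layer 7."""
        for (pool_host, prefix), pool in POOLS.items():
            if host == pool_host and path.startswith(prefix):
                return pool
        return DEFAULT_POOL


    print(choose_pool("intranet.example.com", "/finance/reports"))  # finance-app-pool
    print(choose_pool("www.example.com", "/index.html"))            # general-web-pool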

In response, F5 has developed a series of solutions that address these issues, giving customers control over their applications, and in doing so assuring the success of the data consolidation project.

The F5 Agenda

For over 10 years, F5 has helped the world’s largest enterprises and service providers create an agile IT infrastructure that aligns with their business demands. We have enabled these organizations to gain strategic points of control wherever information is exchanged — from client devices and the network to application servers, data storage, and everything in between.

Enabling effective, business-orientated data consolidation is a central tenet of the F5 philosophy and solution set. And we believe that by following a few simple best practice rules, organizations of all kinds are able to match expectation with reality:

  • Step One: Building a data centre consolidation strategy

    As discussed, focus on the destination, not the journey itself. The end goal of a consolidation project should be the delivery of applications that add true value to the business. Cost reduction is most definitely a worthy outcome, but it is not the outcome. Plan strategies with the application front of mind and operational cost reduction will be achieved along the way.

  • Step Two: Planning return on investment

    Getting data consolidation wrong will only add cost. CapEx and OpEx will rise as more appliances are added to maintain application performance. More devices mean greater complexity, and that results in a greater need for system management. A more comprehensive and considered strategy, with clearly defined success indicators, is required to achieve ROI and to avoid ‘over-promise and under-delivery’. Realistic expectations and timescales should be set at the very beginning of the project.

  • Step Three: Putting your strategy to work

    Utilizing TMOS®, the universal, shared product platform at the heart of all F5 products, assures networks become ‘application-aware’ from the very beginning (and lets F5 products communicate with one another). This allows for intelligent control of the network, which ensures applications are delivered exactly as intended and are protected from security threats, traffic congestion, and network failures.

  • Step Four: Achieving ROI

    With a comprehensive plan in place, a clear understanding of your expected returns, and the right enabling solutions from F5, payback will come from cost savings. Crucially, though, ROI will also be measured by the added value the business will achieve from the delivery of mission-critical applications that enable users to do their jobs more efficiently.

Final Thoughts

A consolidation strategy isn’t about bringing hardware and applications back from remote offices. That’s just what happens. So, if that’s the focus, you’re on a hiding to nothing. What you should be focusing on is how a centralized solution can provide a unified way to access those applications for ALL users — whether they be from a remote office, home office, or on the move.

Taking this wider view changes the way you look at the consolidation effort. And in doing so it provides unique measures of ROI for the business as a whole, solving multiple problems within the organization that go way beyond centralizing your kit.

To get here, it’s critical to benchmark the performance and usability of the application prior to consolidation. This allows you to judge whether your efforts have delivered similar or improved performance. If the performance is comparable, your ROI is based simply on the lower OpEx of having the consolidated environment. Should you increase performance, then your ROI can also include efficiency metrics such as: the solution reduces application access time by x seconds, y times, across z employees.

This can be set against a payroll metric to represent real dollars of employee time that are now available for other work.
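As a worked illustration of that calculation, the sketch below multiplies the x, y, and z figures out against a loaded payroll rate. Every figure is an illustrative assumption (including the assumption that the y accesses happen daily); substitute your own benchmark results.

    # Worked example of the x-seconds, y-times, z-employees efficiency metric.
    # All figures are illustrative assumptions, not benchmarks from this paper.

    seconds_saved_per_access = 5      # x: seconds shaved off each application access
    accesses_per_day = 20             # y: accesses per employee, assumed to be daily
    employees = 2000                  # z: employees affected
    working_days_per_year = 230
    loaded_hourly_rate = 40.0         # assumed fully loaded payroll cost per hour

    hours_saved_per_year = (
        seconds_saved_per_access * accesses_per_day * employees * working_days_per_year
    ) / 3600

    annual_value = hours_saved_per_year * loaded_hourly_rate
    print(f"{hours_saved_per_year:,.0f} hours/year, worth roughly ${annual_value:,.0f}")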

The F5 Difference

F5 is the only vendor that provides an open architectural framework, offering IT organizations new ways to deliver services that generate true business value. This unique framework includes:

  • F5 TMOS

    The universal, shared product platform that adds intelligence and complete control to application delivery.

  • F5 iRules

    The event-driven scripting language that customizes how application traffic is intercepted, inspected, transformed, and directed.

  • F5 iControl

    The open API that helps automate communications and eliminates the need for costly manual intervention.

  • F5 Data Management Operating System

    The core of F5’s scalable and intelligent file virtualization solutions that simplifies and automates data delivery.

And only F5 has DevCentral™, a business-driven social networking site where a community of tens of thousands of users share innovations in creating scalable, dynamic networks.