Why Saving Money in the Cloud Isn’t a Slam Dunk

A recent Cirba survey of 94 individuals from large enterprises found that only 17 percent of organizations had achieved their density and ROI goals with virtualization. In addition, 70 percent of respondents indicated that they planned to move from existing virtual environments to cloud operating models in order to achieve cost savings.

That all makes sense, if you believe that cloud operating models will deliver cost savings. Research completed last year by Cirba CTO Andrew Hillier showed that costs in external clouds add up quickly, and that in many cases you are better off, from a cost perspective, leveraging internal infrastructure.

You might say then, let’s look at filling up our internal cloud first. But internal clouds by their very nature can increase costs. Users with self-serve access to capacity more often than not act like diners at an all-you-can-eat buffet, over-indulging in capacity. Often this is due to the desire to safeguard against risk, or a simple lack of knowledge as to what is really required to service the workload. Pre-defined instance configurations and sized “buckets” of capacity may simplify management, but they can also build in excess capacity compared with custom allocations matched to each workload’s true requirements.
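To make that built-in waste concrete, here is a minimal Python sketch of what happens when a workload’s true requirement is rounded up to the nearest pre-defined bucket. The catalog sizes and the workload’s numbers are purely hypothetical:

```python
# Hypothetical instance catalog: (name, vCPUs, memory in GB),
# ordered from smallest to largest. Illustrative sizes only.
CATALOG = [
    ("small", 1, 2),
    ("medium", 2, 4),
    ("large", 4, 8),
    ("xlarge", 8, 16),
]

def smallest_fit(cpu_needed: float, mem_needed: float):
    """Return the smallest catalog instance covering both requirements."""
    for name, cpu, mem in CATALOG:
        if cpu >= cpu_needed and mem >= mem_needed:
            return name, cpu, mem
    raise ValueError("no instance large enough")

# A workload that truly needs 2.2 vCPUs and 3 GB gets forced up to "large":
name, cpu, mem = smallest_fit(2.2, 3.0)
print(f"{name}: wasting {cpu - 2.2} vCPUs and {mem - 3.0} GB")
# -> large: wasting 1.8 vCPUs and 5.0 GB
```

Multiply that rounding-up effect across hundreds of workloads and the “buffet” waste becomes a material line item.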

The biggest challenge in saving money with internal clouds, however, lies in increasing density beyond existing virtual environments. Internal clouds hold the promise of increased density as a result of sharing infrastructure across a broader base of users. That is a rational belief, but in practical terms you can only achieve higher utilization if you actually know how to increase density without putting workloads and performance at risk. Looking at how successful organizations have been at managing utilization levels within purely virtual infrastructure suggests that most don’t.

Gartner analyst David Cappuccio recently commented in CIO that utilization in virtual infrastructure is stuck at 25% for most organizations. “Easily more than half of the clients we talk with have this situation. In fact, utilization numbers should be way higher, up around 55 to 60 percent, to gain the true economies of running virtualized applications…” he explains. So why do organizations expect to save money in the cloud, when their existing virtual infrastructure is potentially under-utilized?

Much of the focus always ends up on sizing workloads properly. Sizing is critical. Mapping workloads to the right-sized cloud instance ensures performance and the most efficient use of capacity. This requires analyzing the workload’s utilization profile and personality, while factoring in service-level and operational requirements, to find the best match within the instance catalog. Having an established approach and process for “sizing” and “matching” not only enables you to minimize waste, it gives you ammunition to combat the buffet-style capacity binge and show application owners why a particular instance option is best for their workload.
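As a rough illustration of what a sizing-and-matching step might look like, here is a hedged Python sketch. The catalog entries, field names, and the single operational requirement (SSD storage) are assumptions for illustration, not any particular vendor’s model:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    vcpus: int
    mem_gb: int
    ssd: bool           # stands in for an operational requirement
    hourly_cost: float

@dataclass
class Workload:
    peak_cpu: float     # vCPUs at peak, from the utilization profile
    peak_mem: float     # GB at peak
    needs_ssd: bool     # e.g., an I/O-heavy "personality"

def best_match(w: Workload, catalog: list[Instance]) -> Instance:
    """Cheapest catalog instance that satisfies all of the workload's needs."""
    candidates = [
        i for i in catalog
        if i.vcpus >= w.peak_cpu
        and i.mem_gb >= w.peak_mem
        and (i.ssd or not w.needs_ssd)
    ]
    if not candidates:
        raise ValueError("no instance satisfies the workload's requirements")
    return min(candidates, key=lambda i: i.hourly_cost)

catalog = [
    Instance("m.small", 2, 4, ssd=False, hourly_cost=0.05),
    Instance("m.medium", 4, 8, ssd=True, hourly_cost=0.12),
    Instance("m.large", 8, 16, ssd=True, hourly_cost=0.24),
]
w = Workload(peak_cpu=3.1, peak_mem=6.0, needs_ssd=True)
print(best_match(w, catalog).name)   # -> "m.medium"
```

The output of a pass like this is exactly the “ammunition” mentioned above: a defensible, cheapest-fit answer you can show an application owner instead of letting them self-serve the xlarge.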

There is another critical factor that is often neglected: the impact of workload placement on how well utilized infrastructure is. If you want to maximize density, you need to strategically place workloads together on infrastructure. According to Gartner analyst Alessandro Perilli, in the June 9, 2011 research paper “The Big Mind Shift: Capacity Management for Virtual and Cloud Infrastructures”:

“Gartner defines “optimized” as a virtual infrastructure where the workload placement satisfies all of an organization’s technical, business, and compliance constraints and the capacity is allocated to avoid resource wasting (i.e., rightsized).”

If you think of it like a game of Tetris, it’s easier to see how placement is critical to making the best possible use of your infrastructure. If you fit the workloads together well, considering their size (workload personalities and patterns), shape (the policies and requirements that apply), and the available space (capacity), then you can maximize use of the available capacity. A poorly played game of Tetris leaves a lot of empty space in the playing area; in the context of infrastructure, that means wasted capacity. Things get even trickier when you have to factor in all the applicable policies that dictate where a workload can go, such as required service levels, privacy, security, and operational and management constraints, to determine the best placements, as the sketch below shows.
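Here is a toy placement pass in Python using a first-fit-decreasing heuristic, a classic bin-packing approach, with a single policy check. The hosts, workloads, and the “zone” tag standing in for security or service-level constraints are all illustrative assumptions; real placement engines weigh utilization patterns and many more policies:

```python
def place(workloads, hosts):
    """workloads: list of (name, cpu, zone); hosts: list of dicts."""
    # Biggest workloads first -- the first-fit-decreasing heuristic.
    for name, cpu, zone in sorted(workloads, key=lambda w: -w[1]):
        for host in hosts:
            fits = host["free_cpu"] >= cpu
            allowed = zone in host["zones"]   # the policy check
            if fits and allowed:
                host["free_cpu"] -= cpu
                host["placed"].append(name)
                break
        else:
            print(f"{name}: no compliant host with capacity")

hosts = [
    {"name": "h1", "free_cpu": 16, "zones": {"dmz"}, "placed": []},
    {"name": "h2", "free_cpu": 16, "zones": {"internal"}, "placed": []},
]
workloads = [("web", 6, "dmz"), ("db", 10, "internal"), ("batch", 8, "internal")]
place(workloads, hosts)
for h in hosts:
    print(h["name"], h["placed"], "free:", h["free_cpu"])
```

Running it, “batch” goes unplaced even though the environment has 10 free vCPUs in total: the capacity exists, but not where policy allows the workload to run. That is the Tetris problem in miniature, and why raw capacity numbers overstate achievable density.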

The reality is that sizing and placement are challenges in establishing both virtual and cloud infrastructure. Getting it right isn’t a new problem, but organizations migrating virtual infrastructure to hybrid clouds aren’t going to achieve the big upfront hardware savings they realized with virtualization. Saving money with the cloud is going to be much harder, and making good infrastructure choices will only get you so far. The real savings will be won by organizations that figure out how to effectively plan and manage workload placements and infrastructure allocations so that policy requirements are met without giving up efficiency.
