Last week Cirba invited a group of about 20 senior IT workers, ranging from CIOs to heads of infrastructure and architects, to meet experts and discuss their views of the software-defined data center.
The debate, which took place at the prestigious Alain Ducasse restaurant in London’s Dorchester Hotel, covered a range of topics from visions of the future of the data center, the opportunities of policy-driven automation, the challenges of managing multiple IT architectures, capacity planning, SDN and the future of various cloud types. Expert speakers included Andrew Hillier, co-founder and CTO of Cirba, Alessandro Perilli, General Manager, Cloud Management Strategy, Red Hat and John Evans, Distinguished Engineer, Cisco.
Among the highlights:
There was a lively discussion about how quickly it makes sense to move to cloud services, with some attendees expressing a desire to operate predominantly in the cloud in the near future, while others maintained concerns over governance, security and legacy systems. However, most attendees seemed to anticipate a hybrid future in which on-premise data centers commingle with private and public clouds and with hosted workloads in co-location centers.
This led to a good conversation about infrastructure control and the opportunity to manage capacity across various IT platforms and architectures in the same way that a hotel manages room occupancy. Networking implications were also discussed, particularly in light of SDN and its ability to enable greater workload mobility by tearing down the physical barriers that once dictated where workloads could be hosted. Intelligent workload placement systems were discussed as being essential as organizations shift to software-defined environments.
There was also discussion of how companies are modernizing their IT assets and taking advantage of low-cost platforms like Amazon Web Services that bring agility and business alignment. As part of this, several speakers discussed how they communicated plans and managed the expectations of managers, business leaders and end-users. Also important was how to decide what to place where in order to ensure service levels and cost efficiencies. As more hosting options become available, the hosting decision grows more complex: organizations must take the best possible advantage of existing resources while also leveraging external ones.
The discussion reflected that many organizations are only just beginning to take advantage of the perfect storm of new technologies and services that will make running IT a slicker and more flexible affair than has traditionally been the case. Confusion still exists about how best to use all the available options to achieve the best result for a large enterprise. But in general, there was a lot of optimism and excitement about the choices that exist today. There was agreement that many paths are available, but that the key to making the right decisions lies in alignment with the goals and characteristics of the individual organization.
This week we announced support for KVM environments running OpenStack®. We are seeing a rise in the popularity of this platform in our customer base, typically as a secondary platform. This is an interesting trend, because running multiple cloud stacks in an environment means deciding which platform a workload should be hosted on, then which environment is available to support that platform, and finally which host server it should reside on. Having the ability to model all workload demand in one system is critical to understanding how much infrastructure is required, how it should be configured and where new workloads should be placed to take maximum advantage of available resources. Only Cirba does this.
Cirba’s analytics densify KVM environments by safely optimizing VM hosting, placement and sizing decisions. This is critical even in KVM infrastructure, which some see as a low-cost alternative. But the reality is that it’s not low cost if the right management frameworks aren’t in place. The cost of excess hardware and software, and of the performance issues that come from using real-time load balancers to handle placement, outweighs the cost of paying for the hypervisor. That’s where Cirba comes in.
“Cisco® is a big supporter of OpenStack and KVM as an alternative to more traditional choices. The richness of management solutions around OpenStack and KVM is of utmost importance to organizations that are considering this alternative. Cirba’s capabilities bring very sophisticated analytics and integrations that in many ways leapfrog the capabilities found in some of the more established offerings. Advancements like this make it even more likely that companies will deploy KVM in volume,” said Michael O’Gorman, Distinguished Engineer in the Chief Technology & Architecture Office & CTO of the Cloud & Virtualization Group at Cisco.
KVM is the most recent addition to the list of hypervisors Cirba supports, which includes VMware® ESX®, IBM® PowerVM®, Microsoft® Hyper-V® and Red Hat® Enterprise Virtualization (RHEV).
VMware’s vRealize Automation (vRA) was designed to automate the provisioning workflow surrounding new VMs. The solution performs three major functions for an organization:
• Providing a self-service portal for capturing new workload placement requests
• Selecting an environment in which to start the VMs, based on a round-robin algorithm
• Working with vRealize Orchestrator to automate the provisioning process and start the VM
We work with a number of organizations that are planning to adopt vRealize Automation (vRA) for their enterprise clouds while leveraging Cirba to make the routing decisions.
The reason they turn to Cirba for routing is simple: vRA relies on round-robin workload routing to choose host environments for workloads, and this simplistic approach introduces risk (see last week’s blog on the cost of bad routing decisions). We have also talked to organizations that have tried to route workloads manually, but these decisions are too complex to be made using spreadsheets. That’s where Cirba comes in.
Cirba integrates seamlessly with VMware vRealize Automation to provide intelligent, automated demand management. Cirba optimizes VM routing decisions by evaluating detailed workload requirements against the capabilities of available infrastructure from business, technical, policy and resource perspectives. This ensures VMs are placed in host environments that can meet their requirements, and if a suitable match isn’t found, you will know precisely why. Cirba also automatically reserves and holds capacity in the chosen environment to ensure the resources will be available when they are required.
As more organizations plan to deploy cloud management platforms (CMPs) like VMware vRealize Automation that will span multiple hosting environments, they start to examine in depth how they will determine which environments new workloads get routed to. CMPs that do provide routing logic offer only very rudimentary approaches, such as round-robin or random placement. But making these decisions properly is actually quite complicated: it requires factoring in the technical requirements of the workloads (software licensing, storage type, network connectivity, etc.), business and operational policies (service tiers, regulatory requirements, etc.), resource availability (CPU and memory requirements, operational patterns, peak times/seasons) and relative cost.
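To make the contrast with round-robin concrete, a requirement-aware routing decision can be sketched as a filter-then-rank over candidate environments. This is a minimal illustration, not Cirba’s actual model; all class names, attributes and tier values below are hypothetical.

```python
# Hypothetical sketch: match a workload's requirements against the
# capabilities of candidate hosting environments, instead of simply
# cycling through them round-robin.
from dataclasses import dataclass, field

@dataclass
class Environment:
    name: str
    storage_tier: str              # e.g. "ssd", "san", "nas"
    licensed_for: set = field(default_factory=set)
    service_tier: str = "bronze"
    free_cpu: float = 0.0          # spare vCPU capacity
    free_mem_gb: float = 0.0       # spare memory
    unit_cost: float = 1.0         # relative cost per VM

@dataclass
class Workload:
    name: str
    storage_tier: str
    licenses: set
    service_tier: str
    cpu: float
    mem_gb: float

TIER_RANK = {"bronze": 0, "silver": 1, "gold": 2}

def route(workload, environments):
    """Return the cheapest environment that meets every technical,
    policy and resource requirement, or None if nothing qualifies."""
    candidates = [
        e for e in environments
        if e.storage_tier == workload.storage_tier          # technical fit
        and workload.licenses <= e.licensed_for             # licensing fit
        and TIER_RANK[e.service_tier] >= TIER_RANK[workload.service_tier]
        and e.free_cpu >= workload.cpu                      # resource fit
        and e.free_mem_gb >= workload.mem_gb
    ]
    if not candidates:
        return None                 # no suitable match: surface the reason
    return min(candidates, key=lambda e: e.unit_cost)       # cheapest wins
```

A round-robin router would skip every one of these checks, which is exactly why it can land a SQL-licensed, SAN-dependent workload on an unlicensed SSD cluster.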
Making the right decision of where to place a workload is important and the wrong decision can result in performance issues, unnecessary costs, and the need to deploy more infrastructure than is necessary. Although some of these areas may be obvious, others may not, and can significantly drive up the unit cost of hosting workloads in a cloud environment, making these private clouds less attractive to end users.
Stranding resources: One of the more challenging questions to answer is how best to balance workloads across the environments in an enterprise. Overloading a particular cluster or environment with memory-intensive workloads can prematurely close it to new workloads, leaving other resources under-utilized. Storage is another common resource that can become prematurely exhausted, causing expensive compute resources to become unusable. Placing workloads with a view toward balancing demands enables much more efficient use of capacity across a data center and the deferral of infrastructure purchases.
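The balancing idea above can be sketched as picking the cluster where, after placement, the higher of the CPU and memory utilization fractions is lowest, so neither resource runs out while the other sits idle. This is a simplified illustration under assumed dict keys, not any vendor’s algorithm.

```python
# Hypothetical sketch: place a VM where it leaves CPU and memory
# utilization most balanced, avoiding "stranded" capacity (one
# resource exhausted while the other is under-used).
def best_cluster(vm, clusters):
    """vm and each cluster are dicts of used/total CPU and memory.
    Pick the feasible cluster minimizing the worse of its
    post-placement CPU and memory utilization fractions."""
    def worst_util_after(c):
        cpu = (c["cpu_used"] + vm["cpu"]) / c["cpu_total"]
        mem = (c["mem_used"] + vm["mem"]) / c["mem_total"]
        return max(cpu, mem)      # the binding (scarcer) resource

    feasible = [c for c in clusters if worst_util_after(c) <= 1.0]
    return min(feasible, key=worst_util_after) if feasible else None
```

Under this rule a memory-hungry VM is steered away from an already memory-tight cluster even if that cluster has plenty of spare CPU, which is precisely the stranding scenario described above.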
Over-licensing software: Licensing models designed for virtual infrastructure enable organizations to license software on a per-core or per-processor basis, and once an entire host is licensed, there is no limit on the number of VMs that can run on it from a licensing perspective. This creates a significant opportunity: by concentrating workloads requiring certain license types onto certain environments and certain hosts, you can significantly reduce costs. Conversely, operating or planning environments without considering this factor can increase licensing costs significantly – we have found an average of 55% savings just through better VM placements.
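Concentrating license-bound VMs onto as few hosts as possible is essentially a bin-packing problem. The sketch below uses the classic first-fit-decreasing heuristic as an illustration of the idea; it is not Cirba’s placement engine, and the single-resource model (e.g. GB of RAM) is an assumption for brevity.

```python
# Hypothetical sketch: pack VMs that need a host-based license onto as
# few physical hosts as possible (first-fit-decreasing bin packing),
# so that only those hosts need the per-core/per-processor license.
def consolidate(vm_demands, host_capacity):
    """vm_demands: resource demand of each licensed VM (e.g. GB RAM).
    Returns a list of hosts, each a list of VM demands, with no
    host's total demand exceeding host_capacity."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):   # biggest first
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)    # fits on an existing licensed host
                break
        else:
            hosts.append([demand])     # must license one more host
    return hosts

# Eight licensed VMs that might be scattered across eight hosts today
# pack onto three licensed hosts of capacity 48:
packed = consolidate([30, 20, 20, 15, 10, 10, 8, 7], host_capacity=48)
```

A real placement engine must pack across CPU, memory, storage and policy constraints simultaneously, but even this one-dimensional version shows where the license reduction comes from: fewer hosts carrying the licensed workloads.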
Over-servicing workloads: Building fit-for-purpose infrastructure is the best way to ensure your workloads get access to the resources they need without over-servicing them. When booking a hotel, few people would book a penthouse if all they need is a regular room (unless somebody else is paying for it). Similarly not every workload requires access to top tier storage, or 99.999% availability. But many will take it if they have no way of knowing what they truly need (or if somebody else is paying for it). The better approach is to scientifically analyze app requirements against environment capabilities to find a home for each application that gives it access to just the right type of resource.
Under-servicing workloads: The flip side of this double-edged sword is that under-servicing a workload can be costly as well. Matching workloads with environments built on the wrong type of infrastructure (e.g. NAS vs. SAN vs. SSD), with insufficient redundancy (e.g. lacking N+1 HA or off-site replication), or with other fundamental shortcomings is far scarier for organizations than over-servicing, but it can be a very real consequence of poor routing decisions. Bad routing decisions can cause all manner of application performance and availability problems, and no amount of monitoring and performance management will fix the underlying problem.
Non-compliance with business/regulatory policies: Non-compliance with regulatory policies carries a wide range of associated costs, from financial penalties (such as up to $500,000 for not being PCI compliant) to legal action and even suspension of business operations. And although some of these constraints are fairly easy to deal with, others can be quite complex and require more sophisticated policies. For example, certain users’ applications cannot reside on the same infrastructure (e.g. traders vs. researchers), which means that where a workload goes is a more complex function of both what it needs and what is already running in the target environments.
Rework: Last but not least, putting workloads into the wrong environment almost always results in costly “rework” to make things right. And this isn’t a simple matter of stopping VMs and starting them somewhere else – rework also incurs costs due to delayed access to the workload that was to be deployed, and requires significant manual effort to roll back and re-do the change management and service delivery processes. If the workload actually processed production transactions, then data snapshotting and migration may be needed, and end users will experience what is now a service interruption, not just a delay in initial access.
Given the potential risks, it’s important to invest time in understanding how your chosen CMP handles VM routing. It’s not uncommon for organizations to turn to spreadsheets, inserting a manual step between a user requesting capacity through a self-service portal and automated provisioning. Not only does this work against the goal of automated self-service, it also doesn’t solve the problem at hand. Humans working from spreadsheet lists of new requests cannot effectively match all the various requirements against the available infrastructure in an enterprise while accounting for utilization levels, current workload placements, and the myriad other factors that affect the decision.
The solution lies in applying purpose-built analytics that scientifically match all the requirements of the demand against the capabilities of infrastructure resources in available environments. This approach not only provides a low risk way of routing workloads, but it also enables automated access to capacity, which is one of the key goals of deploying a CMP in the first place.
To learn how Cirba enables intelligent, automated VM routing watch this short video.
The cloud promised to deliver faster access to capacity, automation and truly fit-for-purpose infrastructure. But a catalog and a self-service portal alone don’t constitute a cloud, and what most organizations call cloud today really isn’t one. In reality, most organizations haven’t achieved their cloud goals and don’t have a clear line of sight to how they will get there.
The management tooling available isn’t helping matters. Today’s Cloud Management Platforms (CMPs) let you capture self-service requests and automate some of the provisioning process, but they don’t offer the single, intelligent control plane you need to enable full automation, optimization and fit-for-purpose infrastructure.
According to Forrester analyst Lauren E. Nelson, there are some key facts about private cloud strategies today that all tech management leaders should be aware of – facts that will help them develop plans that maximize value.
An upcoming license renewal can be a blessing or a curse. For many organizations it typically means ever-increasing costs as the environment grows, particularly for popular software packages like operating systems and databases. Below is the story of how one bank leveraged Cirba to actually reduce its license requirement while still leaving room for needed growth.
With a Windows Server Datacenter edition license renewal approaching, the bank saw a high risk of significant cost increases. The processor-based licensing model could enable the bank to take advantage of economies of scale and run more VMs per licensed physical host, saving on Windows Server licensing. Unfortunately, the bank had no way to determine whether it really required Windows Server licenses for all 4,000 physical hosts that were currently licensed. Worse, the cost issue was about to be exacerbated by environment growth and potential further sprawl throughout the data center.
The bank chose Cirba’s Software-Defined Infrastructure Control to address the issue. The Software License Control module, part of the solution’s Control Console, enables organizations to optimize VM sizing and placements considering all utilization, technical, business and operational requirements, including software licensing. The bank recognized the value Cirba brought in balancing application demand with infrastructure supply to increase efficiency and agility while reducing performance and operational risk. Due to tight renewal timelines, Windows Server software licensing optimization became the top priority.
Within a few short weeks, Cirba was deployed and the analysis was completed to identify optimal VM placements, which significantly reduced the required Windows Server footprint in the environment. Cirba accomplished this by isolating the licensed VMs from those not requiring the licenses and maximizing the density of licensed components on physical hosts. By leveraging Cirba to control VM placements on an ongoing basis, the bank ensured Windows VMs were contained to the licensed physical servers.
• Using Cirba’s analytics, the bank reduced its requirement from 4,000 to 3,400 licensed physical servers
• The bank reduced its license requirement conservatively by just over 20% to allow for planned growth
• The license savings totaled USD $5.5 million
The bank continues to use Cirba to automate VM sizing and rebalancing, ensuring continual risk mitigation, efficiency and software license optimization and containment.
Anyone who manages SQL Server® licenses for their organization will know that Microsoft® standardized on core-based licensing for Enterprise Edition with the SQL Server 2014 release in April.
For many, this change introduces uncertainty about how core-based licensing will impact their environment and, of course, their costs. Many applications today offer this kind of licensing for virtualized infrastructure. Whether an application is licensed by core or by CPU socket, the net result is the same: you can effectively license an entire host and run as many instances of the application on it as you want.
The key to making these kinds of licensing models work for you is the ability to optimize VM placements in order to minimize the number of physical hosts that need to be licensed. This is a big challenge for the many organizations that don’t have an intelligent workload placement engine and instead rely on balancing tools like VMware® DRS® to place workloads. Many tools claim to offer licensing optimization when in reality they are just tracking workloads, or containing them to the existing number of licensed servers. That doesn’t help with the core problem of how to reduce your license requirement – now.
Cirba takes a different approach:
• Optimize workload placements to both isolate licenses and increase VM density for the target license type. The net effect is an immediate reduction in the number of physical hosts requiring the licenses – on average 55%.
• Avoid future sprawl by containing VMs requiring those licenses to those hosts during rebalancing. Cirba also routes each new VM to the right environment and physical host, considering its licensing requirements.
The impact of placing VMs this way on software licensing costs is significant. One recent analysis for a customer, covering Microsoft® SQL Server® Enterprise Edition, reduced the license requirement by a total of 400 physical hosts. That’s big dollars for any organization!
In fact, Cirba has saved organizations an average of 55% on licensing requirements for software packages like Microsoft® SQL® Server, Microsoft® Windows® Server, Oracle® database, IBM® Websphere®, and CA® Application Performance Management (Wily®). In an enterprise environment that translates to millions in software licensing savings.
Virtual and cloud environments have opened up the possibility of moving to core-based or processor-based licensing, or what we refer to as host-based licensing. These models essentially permit the licensing of an entire physical host server upon which an unlimited number of instances can be run.
But buyer beware! Careful planning and controls are required in order to harness the potential of these models and reduce license costs. VM placements are key, but you don’t want to rely on just containment – that won’t reduce your costs today.
Download the tips guide below to learn what is required to really harness the potential efficiencies offered by these models and find immediate savings in your environment!
We are very pleased to announce that IBM has standardized on Cirba for its Private Modular Cloud (PMC) offering. PMC is IBM’s private, on-premise cloud solution: a packaging of hardware, software, system orchestration and management that enables an organization to stand up a customized cloud in less than a day.
Cirba enables organizations to reduce performance risk, increase VM density and efficiency, and achieve unprecedented automation in private cloud. Will Padman, IBM’s Global Product Executive for Cloud Automation Services, explained why IBM chose Cirba for its PMC offering:
“Critical to effective private cloud operations is really the ability to balance infrastructure supply with application demand and Cirba is really the only solution in the marketplace today that actually does that.”
Watch this short video featuring Chuck Tatham, Cirba’s SVP of Business Development and Marketing, and Will Padman, IBM’s Global Product Executive for Cloud Automation Services, to learn more about PMC and how Cirba can be used to optimize those environments.
This week Cirba released a new infographic leveraging findings from analyst firm EMA’s recent survey of 235 infrastructure professionals. The infographic provides insight into:
• Top priorities in establishing software-defined infrastructure
• The progress organizations have made
• The key obstacles they’ve experienced as they work towards this goal
Watch this short video by Cirba president & CEO Gerry Smith to understand how Cirba grew from its roots in virtualization and transformation-planning analytics to become the leader in infrastructure control for enterprise private cloud environments, today providing organizations with real control for the software-defined era.
We are very pleased to share a paper by analyst firm EMA that explores the challenges and approaches IT organizations need to take in order to achieve a software-defined operational state. In the words of EMA:
“EMA has identified the software-defined data center as one of the dominating trends in IT in 2014. However, in many cases, increased complexity in current IT environments, processes, and cultures has substantially impeded the organization’s ability to complete this transition.
EMA asked IT executives, IT operations staff, and business managers and executives of 235 organizations with highly mature IT departments that had deployed at least five SDDC-related technologies what they consider their most pressing IT challenges.”
This paper explores the results of that research and how technologies like software-defined infrastructure control can help.
As we have mentioned in previous blog posts, organizations struggle with the decision of where to put their workloads. This doesn’t just apply at the server level; it is also very challenging to determine which environments new and existing applications should run in. Spreadsheets are commonly used and, to be blunt, these homegrown models just aren’t up to the task.
In January, Cirba released the Reservation Console, which enables organizations to optimize and automate workload routing decisions. At VMworld 2014, we are very excited to announce that we are extending support for the Reservation Console and the Control Console to include Amazon Web Services and IBM SoftLayer!
This means Cirba will be able to determine the best execution venue for applications, whether that is on internal or external infrastructure, while also providing management control and visibility across all enterprise workloads.
A lot of organizations are adopting VMware vCloud Automation Center (vCAC) and we often get asked how Cirba fits.
The two solutions are very complementary: vCAC provides a provisioning workflow that captures end-user requirements for a VM, builds and registers the components, and turns on the VMs.
Through our API, Cirba integrates seamlessly with VMware vCloud Automation Center to enable intelligent demand management (VM routing, capacity reservations and host-level placements) and capacity supply optimization.
For the details on how the solutions work together, watch this short video by Andrew Hillier, Cirba CTO & Co-founder.