Will you have visibility across all your enterprise workloads – hosted internally and externally? Do you really know where they should run? When to re-size? When to bring workloads back in-house or change hosting environments?
Many of the organizations we speak with are struggling with how to embrace public cloud. Two key questions are top of mind in those conversations: 1. How can I automatically make the best decisions about where to host workloads? 2. How can I continue to manage workloads that do go outside my four walls?
Having a centralized, policy-based control system for hybrid cloud that provides the necessary checks and balances is critical. Determining where a workload should be hosted and how resources are allocated are fundamental, and providing automation and governance around this is at the core of cloud operational models.
Today Cirba made new integrations available for Microsoft® Azure™, Amazon Web Services (AWS) and IBM® SoftLayer® that provide centralized management for enterprise applications across hybrid cloud environments. Cirba customers will now have extended visibility into applications that are hosted externally and whether they are appropriately resourced. They will also be able to assess these applications against on-premise hosting environments to determine whether they should be brought back in-house. With this release, Cirba extends its existing support for internal VMware vCenter, Microsoft Hyper-V, IBM PowerVM on AIX and Red Hat Enterprise Virtualization-based environments to external clouds so that customers can seamlessly manage hybrid cloud environments.
Cirba also automatically determines the best execution venue for applications in hybrid clouds that include Microsoft® Azure™, Amazon Web Services and IBM® SoftLayer® based on resource requirements, technical considerations, policy, security, compliance and costs.
In the words of our CTO and co-founder, Andrew Hillier:
“Cirba provides the necessary decision control point for automatically determining where applications can safely run in hybrid environments. It is only through detailed analysis of application requirements against the security, cost, and technical capabilities of available public clouds and internal infrastructures that the best hosting environment can be chosen. Without analytics, organizations cannot automate the process nor can they effectively determine how to meet application requirements without risk or excessive cost.
Having a centralized policy-based control system for hybrid cloud that provides the necessary checks and balances is critical. Determining where a workload should be hosted and how resources are allocated is fundamental to modern IT infrastructure, and providing automation and governance around this is at the core of cloud and software-defined operational models.”
Decision Control for Routing to Public Cloud
Reservation Console hybrid routing capabilities
Visibility into allocation health for workloads in public clouds via Cirba
Cirba’s software-defined infrastructure control solution was recently recognized in Virtualization Review’s “2015 Editor’s Choice Awards.”
For this prestigious accolade, contributing editors name the products they loved the most in 2015; Cirba was chosen by virtualization expert Dan Kusnetzky.
After hearing Dan’s explanation, it’s not hard to figure out why he bestowed the honor on Cirba:
“Cirba analyzes workload requirements and available resources, then suggests the best use of those resources. When IT checks in a new workload (x86 or Power architecture), its needs are analyzed and the best placement for the workload in the organization’s IT infrastructure is calculated. Cirba makes it possible for organizations to get more work done in the same IT infrastructure.”
Dan also went on to explain why he relies on Cirba over other solutions:
“VMware’s vRealize suite and VMTurbo’s Operations Manager, on the surface, offer similar capabilities. Those tools, however, are largely reactive; that is, they analyze the last “x” amount of resource utilization and make a decision to move VMs. Cirba has that capability, as well, but goes beyond it to add predictive analytics that can fit workloads together as if it was a software-defined Tetris game.”
With a fantastic 2015 behind us, we’re excited to help even more Global 2000 organizations realize their goals for hybrid cloud and next gen infrastructure. Stay tuned as we roll out exciting new announcements in the upcoming months.
For more on Cirba’s Software-Defined Infrastructure Control, visit Cirba.com.
Cirba recently announced integration to EMC® ViPR® SRM, enabling organizations to optimize use of storage resources through smarter workload placements and visibility into storage health.
Storage is critically important and at the same time incredibly expensive, so our customers are always looking for ways to make the best possible use of these assets. With this new release, Cirba’s analytics uniquely enable organizations to make better use of storage resources monitored by SRM by finding the best execution venue for workloads based on multiple factors, including detailed storage requirements, workload utilization, and business, technical, and software licensing considerations. These smarter workload placements keep the use of these resources balanced with the use of compute resources across the enterprise to free up stranded capacity. Cirba also provides virtual and cloud infrastructure management teams with visibility into when resource shortfalls might adversely affect associated VMs and where excess resources exist in the storage resources attached to EMC SRM.
We sat down with Cirba CTO & Co-Founder, Andrew Hillier, to discuss why storage requirements need to be considered when choosing where to host VMs and what the net benefit is for organizations that do this well.
As reported in Fortune magazine by Barb Darrow, Ben Fathi, former VMware® CTO, joined Cirba’s board of directors. Earlier this week, Mr. Fathi announced that he had taken on the position of head of engineering at CloudFlare, an Internet security and optimization company.
In Ben’s own words, “I am excited about joining Cirba’s board of directors. I first heard about Cirba from friends who were impressed with the product and the capex and opex savings it offered in their virtualized environments. Discussions with executives from the company as well as other members of the board of directors convinced me that they have a winning strategy and a great product. I look forward to working with Gerry and the team as they define the next generation of products and services from Cirba.”
We are thrilled to be working with Ben. He has spent the last three decades at industry giants including VMware®, Cisco® and Microsoft®, building products that power the infrastructures of the Fortune 1000. In short, he knows our customers and the challenges they face.
As more diverse virtual and cloud hosting options become available, what organizations really require is intelligent automation that is driven by analytics, with a complete world view that includes detailed workload requirements and deep awareness of the capabilities of available infrastructures.
Cirba adds this intelligence to cloud management platforms including VMware vRealize Automation, OpenStack and now IBM Cloud Orchestrator. Adding Cirba to the automated management and provisioning functions of IBM Cloud Orchestrator provides incredibly powerful software-defined control for any organization looking to adopt private cloud.
To learn more watch this short video:
Did you also know that Cirba and IBM partner on a number of fronts?
Click on a solution to learn more:
“Where we see the future, where we see the greatest opportunity for us, is to move much more toward software-defined infrastructure.”
Bank of America CTO, David Reilly
Cirba customer, Bank of America, was recently interviewed by ZDNet on the topic of their software-defined infrastructure (SDI) effort. In the interview, the bank’s CTO, David Reilly, provides a fascinating overview of how the bank views SDI and why it’s so important to remaining competitive as infrastructure becomes commoditized.
Referring to the bank’s SDI initiative as “Project Greenfield”, Mr. Reilly states that:
“It’s really about application hosting… an application arrives with effectively a manifest ‘I need this much compute, this much storage, I need them this far apart physically for recovery purposes’ and all of that is provisioned dynamically…”
Mr. Reilly states the drivers behind this move include time to market, risk reduction and dramatically lower costs through driving higher density.
Last week Cirba invited a group of about 20 senior IT leaders, ranging from CIOs to heads of infrastructure and architects, to meet experts and discuss their views of the software-defined data center.
The debate, which took place at the prestigious Alain Ducasse restaurant in London’s Dorchester Hotel, covered a range of topics from visions of the future of the data center, the opportunities of policy-driven automation, the challenges of managing multiple IT architectures, capacity planning, SDN and the future of various cloud types. Expert speakers included Andrew Hillier, co-founder and CTO of Cirba, Alessandro Perilli, General Manager, Cloud Management Strategy, Red Hat and John Evans, Distinguished Engineer, Cisco.
Among the highlights:
There was a lively discussion on how quickly it made sense to move to cloud services, with some attendees expressing a desire to be predominantly operating in the cloud in the near future while others maintained concerns over governance, security and legacy systems. However, most attendees seemed to anticipate a hybrid future where on-premise data centers commingle with private and public clouds and hosted workloads in co-location centers.
This brought us to a good conversation about infrastructure control and the opportunity to manage capacity in the same way that a hotel might handle room occupancy, across various IT platforms and architectures. Networking implications were also discussed, particularly in light of SDN and its ability to enable greater workload mobility by tearing down physical barriers that once dictated where workloads could be hosted. Systems for intelligent workload placement control were discussed as essential as organizations make the shift to software-defined environments.
There was also discussion of how companies are moving to modernize their IT assets and take advantage of low-cost platforms like Amazon Web Services that bring agility and business alignment. As part of this, several speakers discussed how they communicated plans and managed the expectations of managers, business leaders and end-users. Also important was how to decide what to place where in order to ensure service levels and cost efficiencies. As more hosting options become available, the hosting decision grows more complex: the goal is to take the best possible advantage of existing resources while also leveraging external ones.
The discussion reflected that many organizations today are only just taking advantage of the perfect storm of new technologies and services that will make running IT a slicker and more flexible affair than has traditionally been the case. Confusion still exists about how to best utilize all the possible options in order to gain the best result for large enterprise. But in general, there was a lot of optimism and excitement about the choices that exist today. Agreement was reached that there are many paths available, but the key to making the right decisions lies in alignment with the goals and characteristics of the individual organization.
This week we announced support for KVM environments running OpenStack®. We are seeing a rise in popularity of this platform in our customer base as a secondary platform. This is an interesting trend, as multiple cloud stacks in an environment mean deciding which platform a workload should be hosted on, then which environment is available to support that platform, and finally which host server it should reside on. Having the ability to model all workload demand in one system is critical to understanding how much infrastructure is required, how it should be configured and where new workloads should be placed to take maximum advantage of available resources. Only Cirba does this.
Cirba’s analytics densify KVM environments by safely optimizing VM hosting, placement and sizing decisions. This is critical even in KVM infrastructure, which some see as a low-cost alternative. But the reality is that it’s not low cost if the right management frameworks aren’t in place. The cost of excess hardware, software and the performance issues that come from using real-time load balancers to handle placement outweighs the cost of paying for the hypervisor. That’s where Cirba comes in.
“Cisco® is a big supporter of OpenStack and KVM as an alternative to more traditional choices. The richness of management solutions around OpenStack and KVM is of utmost importance to organizations that are considering this alternative. Cirba’s capabilities bring very sophisticated analytics and integrations that in many ways leapfrog the capabilities found in some of the more established offerings. Advancements like this make it even more likely that companies will deploy KVM in volume,” said Michael O’Gorman, Distinguished Engineer in the Chief Technology & Architecture Office & CTO of the Cloud & Virtualization Group at Cisco.
KVM is the most recent addition to the list of hypervisors Cirba supports, which includes VMware® ESX®, IBM® PowerVM®, Microsoft® Hyper-V® and Red Hat® Enterprise Virtualization (RHEV).
VMware’s vRealize Automation (vRA) was designed to automate the provisioning workflow surrounding new VMs. The solution performs three major functions for an organization:
Providing a self service portal for capturing new workload placement requests
Selecting an environment to start the VMs based on a round-robin algorithm
Working with vRealize Orchestrator to automate the provisioning process and start the VM
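To make the second step concrete, here is a minimal sketch of what round-robin environment selection amounts to. The environment names and request shape are hypothetical illustrations, not vRA’s actual API:

```python
from itertools import cycle

# Hypothetical environment names, for illustration only.
environments = ["cluster-a", "cluster-b", "cluster-c"]
next_env = cycle(environments)

def route_round_robin(vm_request):
    """Round-robin routing: each request simply gets the next
    environment in rotation, regardless of what the VM needs."""
    return next(next_env)

print(route_round_robin({"name": "vm-1"}))  # cluster-a
print(route_round_robin({"name": "vm-2"}))  # cluster-b
```

Every request gets the next environment in rotation, no matter what the workload actually requires, which is precisely where the risk lies.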
We work with a number of organizations that are planning to adopt vRealize Automation (vRA) for their enterprise clouds while leveraging Cirba to make the routing decisions.
The reason they turn to Cirba for routing is simple: vRA relies on round-robin workload routing to choose host environments for workloads, and this simplistic approach introduces risk (see last week’s blog on the cost of bad routing decisions). We have talked to other organizations that have tried to route workloads manually, but these decisions are too complex to be made using spreadsheets. That’s where Cirba comes in.
Cirba integrates seamlessly with VMware vRealize Automation to provide intelligent, automated demand management. Cirba optimizes VM routing decisions by evaluating detailed requirements against the capabilities of available infrastructures from a business, technical, policy and resource perspective. This ensures VMs are placed in host environments that can meet their requirements, and if a suitable match isn’t found, you will know precisely why. Cirba also automatically reserves and holds capacity in the chosen environment to ensure the resources will be available when they are required.
As more organizations plan to deploy cloud management platforms (CMPs) like VMware vRealize Automation that will span multiple hosting environments, they start to examine in depth how they will determine which environments new workloads will get routed to. CMPs that do provide routing logic offer only very rudimentary approaches, such as round robin or random placement. But properly making these decisions is actually quite complicated, as they need to factor in the technical requirements of the workloads (think software licensing, storage type, network connectivity, etc), the business and operational policies (think service tiers, regulatory requirements, etc), the resource availability (think CPU and memory requirements, operational patterns, peak times/seasons) and relative cost.
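As a rough illustration of why these decisions are more involved than round-robin or random placement, the sketch below checks a few hard requirements and then picks the cheapest qualifying environment. All field names, tiers and costs are simplified assumptions for illustration; a real analysis weighs many more factors (utilization patterns, peak seasons, licensing, operational policies):

```python
# Simplified, hypothetical sketch of requirements-vs-capabilities matching.

def eligible(workload, env):
    """An environment qualifies only if it meets every hard requirement."""
    return (env["storage_tier"] == workload["storage_tier"]
            and env["service_tier"] >= workload["service_tier"]
            and env["free_cpu"] >= workload["cpu"]
            and env["free_mem_gb"] >= workload["mem_gb"])

def route(workload, envs):
    """Among qualifying environments, choose the cheapest."""
    candidates = [e for e in envs if eligible(workload, e)]
    if not candidates:
        return None  # no suitable match -- and each failed check tells you why
    return min(candidates, key=lambda e: e["cost_per_vm"])

envs = [
    {"name": "prod-east", "storage_tier": "ssd", "service_tier": 3,
     "free_cpu": 8, "free_mem_gb": 64, "cost_per_vm": 120},
    {"name": "dev-west", "storage_tier": "sas", "service_tier": 1,
     "free_cpu": 32, "free_mem_gb": 256, "cost_per_vm": 40},
]
wl = {"storage_tier": "ssd", "service_tier": 2, "cpu": 4, "mem_gb": 16}
print(route(wl, envs)["name"])  # prod-east
```

Even this toy version shows why a spreadsheet breaks down: every new requirement multiplies the combinations that must be checked against every candidate environment.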
Making the right decision of where to place a workload is important and the wrong decision can result in performance issues, unnecessary costs, and the need to deploy more infrastructure than is necessary. Although some of these areas may be obvious, others may not, and can significantly drive up the unit cost of hosting workloads in a cloud environment, making these private clouds less attractive to end users.
Stranding Resources: One of the more challenging questions to answer is how to best balance workloads across environments in an enterprise. Overloading a particular cluster or environment with memory-intensive workloads can prematurely close an environment to new workloads, leaving other resources under-utilized. Storage is another common resource that can become prematurely exhausted, causing expensive compute resources to become unusable. Placing workloads with a view toward balancing demands can enable much more efficient use of capacity across a data center and the deferral of infrastructure purchases.
Over-licensing software: Licensing models designed for virtual infrastructure enable organizations to license software on a per core or per processor basis, and if an entire host is licensed, then there is no limit on the number of VMs that can be run from a licensing perspective. This creates a significant opportunity, and by concentrating workloads requiring certain license types to certain environments and certain hosts, you can significantly reduce costs. Conversely, operating or planning environments without considering this factor can increase licensing costs significantly – we have found an average of 55% savings just through better VM placements.
Over-servicing workloads: Building fit-for-purpose infrastructure is the best way to ensure your workloads get access to the resources they need without over-servicing them. When booking a hotel, few people would book a penthouse if all they need is a regular room (unless somebody else is paying for it). Similarly not every workload requires access to top tier storage, or 99.999% availability. But many will take it if they have no way of knowing what they truly need (or if somebody else is paying for it). The better approach is to scientifically analyze app requirements against environment capabilities to find a home for each application that gives it access to just the right type of resource.
Under-servicing workloads: The flip-side of this double-edged sword is that under-servicing a workload can be costly as well. Matching workloads with environments built on the wrong types of infrastructure (e.g. NAS vs SAN vs SSD), with insufficient redundancy (e.g. N+1 HA, off-site replication), or with other fundamental issues is far more scary for organizations than over-servicing, but it can be a very real consequence of poor routing decisions. Bad routing decisions can cause all manner of application performance and availability problems, and no amount of monitoring and performance management will fix the problem.
Cost of non-compliance with business / regulatory policies: Non-compliance with regulatory policies carries a wide range of associated costs, from financial penalties (such as up to $500,000 for not being PCI compliant) to legal action and even suspension of business operations. And although some of these constraints are fairly easy to deal with, others can be quite complex and require more sophisticated policies. For example, certain users’ applications cannot reside on the same infrastructure (e.g. traders vs. researchers), which means that where a workload goes is a more complex function of what it needs and what is already running in the target environments.
Rework: Last but not least, putting workloads into the wrong environment almost always results in costly “rework” to make things right. And this isn’t a simple matter of stopping VMs and starting them somewhere else – rework also incurs costs due to delayed access to the workload that was to be deployed, and requires significant manual effort to roll back and re-do the change management and service delivery processes. If the workload actually processed production transactions, then data snapshotting and migration may be needed, and end users will experience what is now a service interruption, not just a delay in initial access.
Given the potential risks, it’s important to invest time understanding how your chosen CMP handles VM routing. It’s not uncommon for organizations to turn to spreadsheets, inserting a manual step into the process of a user requesting capacity through a self service portal and offering automated provisioning. Not only does this go against the goal of being able to automate self-service, but it also doesn’t solve the problem at hand. Humans using spreadsheet lists of new requests cannot effectively match all the various requirements against the existing available infrastructures in enterprises, accounting for utilization levels, current workload placements, and the myriad of other factors that impact the decision.
The solution lies in applying purpose-built analytics that scientifically match all the requirements of the demand against the capabilities of infrastructure resources in available environments. This approach not only provides a low risk way of routing workloads, but it also enables automated access to capacity, which is one of the key goals of deploying a CMP in the first place.
To learn how Cirba enables intelligent, automated VM routing watch this short video.
The cloud promised to deliver faster access to capacity, automation and truly fit-for-purpose infrastructure. But a catalog and a self-service portal alone don’t fully constitute cloud, and what most organizations call cloud today really isn’t. In reality, most organizations haven’t achieved their cloud goals and don’t have a clear line of sight on how to get there.
The management tooling available isn’t helping the matter. Today’s Cloud Management Platforms (CMPs) enable you to capture self-service requests and automate some of the provisioning process. But they don’t offer the single intelligent control plane you need to enable full automation, optimization or fit-for-purpose infrastructure.
According to Forrester analyst Lauren E. Nelson, there are some key facts about private cloud strategies today that all tech management leaders should be aware of. Knowing these facts will help tech leaders develop plans that maximize value.
An upcoming license renewal can be a blessing or a curse. Unfortunately for many organizations, it typically means ever-increasing costs as environments grow for popular software packages like operating systems or databases. Below is the story of how one large bank leveraged Cirba to actually reduce its licensing while still leaving room for required growth.
With a Windows Server Datacenter edition license renewal approaching, the bank saw a high risk for significant cost increases. The processor-based licensing model could enable the bank to take advantage of economies of scale and run more VMs per licensed physical host, saving the organization on Windows Server licensing. Unfortunately, the bank had no way to determine whether they really required Windows Server licenses for the 4000 physical hosts that were currently licensed. Not only that, the cost issue was about to be exacerbated with environment growth and potential further sprawl throughout the data center.
Cirba’s Software-Defined Infrastructure Control was chosen by the bank to address the issue. The Software License Control module is part of the solution’s Control Console and enables organizations to optimize VM sizing and placements considering all the utilization, technical, business and operational requirements, including software licensing. The bank recognized the value Cirba brought in terms of balancing application demand with infrastructure supply to increase efficiency and agility while reducing performance and operational risk. Due to tight renewal timelines, Windows Server software licensing optimization became the top priority.
Within a few short weeks, Cirba was deployed and the analysis was completed to identify optimal VM placements, which significantly reduced the required Windows Server footprint in the environment. Cirba accomplished this by isolating the licensed VMs from those not requiring the licenses and maximizing the density of licensed components on physical hosts. By leveraging Cirba to control VM placements on an ongoing basis, the bank ensured Windows VMs were contained to the licensed physical servers.
Using Cirba’s analytics the bank reduced its requirement from 4,000 to 3,400 licensed physical servers
The 600-server reduction represents a conservative cut of 15%, leaving room for planned growth
The license savings totaled US$5.5 million
The bank continues to use Cirba to automate VM sizing and rebalancing, ensuring continual risk mitigation, efficiency and software license optimization and containment.
Anyone who owns the SQL Server® licenses for their organization will know that Microsoft® made a change to standardize on core-based licensing for the Enterprise Edition with the 2014 release in April.
For many this represents a change and introduces uncertainty about how core-based licensing will impact their environment and of course, costs. Many applications available today offer this kind of licensing for virtualized infrastructure. Whether an application is licensed by core or by CPU socket, the net result is the same, enabling you to effectively license an entire host and run as many instances of the application as you want on it.
The key to making these kinds of licensing models work for you is having the ability to optimize VM placements in order to minimize the number of physical hosts that need to be licensed. This can be a big challenge for many organizations that don’t have an intelligent workload placement engine and instead rely on balancing tools like VMware® DRS® to place workloads. Many of the tools out there claim to offer licensing optimization when in reality they are just tracking or containing the workloads to the existing number of licensed servers. This doesn’t help you with the core problem of how to reduce your license requirement – now.
Optimize workload placements to both isolate licenses and increase VM density for the target license type. The net effect of this is an immediate reduction in the number of physical hosts requiring the licenses – on average 55%.
Avoid future sprawl and contain VMs requiring those licenses to those hosts during rebalancing. Cirba also routes new VMs to the right environment and physical host considering its licensing requirements.
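The isolation step can be thought of as a bin-packing problem: concentrate the VMs that need a host-based license onto as few hosts as possible, and license only those hosts. The sketch below uses a simple first-fit-decreasing heuristic on a single dimension (memory); the capacities and VM sizes are hypothetical, and real placement analytics consider far more than one resource:

```python
# Illustrative first-fit-decreasing sketch of license isolation.
# Assumption: licensing an entire host covers unlimited VMs on it,
# so fewer licensed hosts means lower license cost.

def licensed_hosts_needed(vm_mem_gb, host_capacity_gb):
    """Pack licensed VMs onto hosts (first-fit decreasing) and
    return how many hosts must carry the license."""
    hosts = []  # remaining free capacity on each licensed host
    for mem in sorted(vm_mem_gb, reverse=True):
        for i, free in enumerate(hosts):
            if free >= mem:
                hosts[i] -= mem  # place VM on first host with room
                break
        else:
            hosts.append(host_capacity_gb - mem)  # open a new licensed host
    return len(hosts)

# Ten licensed VMs scattered one per host would need ten licensed hosts;
# packed onto hypothetical 256 GB hosts they need far fewer.
vms = [96, 64, 64, 48, 32, 32, 24, 16, 16, 8]
print(licensed_hosts_needed(vms, 256))  # 2
```

Scattered one per host, these ten licensed VMs would require ten licensed hosts; packed, they fit on two. Consolidation of this kind is what drives the license reductions described above.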
The impact of placing VMs this way on software licensing costs is significant. One recent analysis done for a customer for Microsoft® SQL Server® Enterprise Edition reduced the license requirement by a total of 400 physical hosts. That’s big dollars for any organization!
In fact, Cirba has saved organizations an average of 55% on licensing requirements for software packages like Microsoft® SQL Server®, Microsoft® Windows Server®, Oracle® Database, IBM® WebSphere® and CA® Application Performance Management (Wily®). In an enterprise environment that translates to millions in software licensing savings.
Virtual and cloud environments have opened up the possibility of moving to core-based or processor-based licensing, or what we refer to as host-based licensing. These models essentially permit the licensing of an entire physical host server upon which an unlimited number of instances can be run.
But buyer beware! Careful planning and controls are required in order to harness the potential of these models and reduce license costs. VM placements are key, but you don’t want to rely on just containment – that won’t reduce your costs today.
Download the tips guide below to learn what is required to really harness the potential efficiencies offered by these models and find immediate savings in your environment!
We are very pleased to announce that IBM has standardized on Cirba for its Private Modular Cloud (PMC) offering. PMC is IBM’s on-premise private cloud solution: a packaging of hardware, software, system orchestration and management that enables an organization to stand up a customized cloud in less than a day.
Cirba enables organizations to reduce performance risk, increase VM density and efficiency, and achieve unprecedented automation in private cloud. Will Padman, IBM’s Global Product Executive, Cloud Automation Services, explains why IBM chose Cirba for its PMC offering:
“Critical to effective private cloud operations is really the ability to balance infrastructure supply with application demand and Cirba is really the only solution in the marketplace today that actually does that.”
Watch this short video featuring Chuck Tatham, Cirba’s SVP of Business Development and Marketing and Will Padman, IBM’s Global Product Executive, Cloud Automation Services to learn more about PMC and how Cirba can be used to optimize those environments.