Predicting Cloud Workload Resource Requirements with Analytics

Best Practices for Modeling Workload Resource Demands

To effectively rightsize infrastructure for your workload, whether it is hosted in a public cloud like AWS, Google Cloud, or Azure, or in a Kubernetes-based container environment, it is critical to build a predictive model of workload patterns across many key metrics.

For example, consider a workload currently running on an AWS EC2 t2.medium instance, with a recommended upsize to an AMD-based t3a.large. When analyzing historical usage patterns for this instance, you’ll want a detailed workload history across a variety of metrics (see the sketch after this list), such as:

  • CPU utilization
  • Network I/O bytes
  • Network I/O receive bytes
  • Disk I/O bytes
  • Numerous other important data points
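As a concrete illustration, the sketch below shows one way such a history could be pulled for a single instance from Amazon CloudWatch using boto3. The instance ID, region, time window, and metric list are placeholder assumptions for illustration only; this is not how Densify collects the data.

```python
# Hypothetical sketch: pulling a five-minute workload history for one EC2
# instance from CloudWatch. Instance ID, region, and window are placeholders.
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance
METRICS = [
    "CPUUtilization",
    "NetworkIn",
    "NetworkOut",
    "DiskReadBytes",
    "DiskWriteBytes",
]

end = datetime.datetime.now(datetime.timezone.utc)
# Three days keeps this under CloudWatch's 1,440-datapoints-per-call cap;
# a longer history would require paginating over multiple time windows.
start = end - datetime.timedelta(days=3)

history = {}
for metric in METRICS:
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=start,
        EndTime=end,
        Period=300,                        # five-minute samples
        Statistics=["Average", "Maximum"],
    )
    # CloudWatch returns datapoints unordered; sort them chronologically.
    history[metric] = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
```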

In Densify, this data is gathered by default through an agentless audit of your infrastructure.

At this point, less-sophisticated approaches generate demand predictions from simple peaks and averages, or from data rollups based on infrequent sampling.
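To see why that matters, here is a toy example with synthetic numbers (an illustrative assumption, not real telemetry): a single five-minute burst that an hourly average completely hides.

```python
# Illustrative toy example: a bursty workload whose hourly average looks
# safe while the five-minute peak does not. Values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(5, 15, size=12)  # twelve 5-min CPU% samples in one hour
samples[7] = 92.0                      # one 5-minute burst

print(f"hourly average: {samples.mean():.1f}%")  # ~17% -- looks fine
print(f"5-min peak:     {samples.max():.1f}%")   # 92% -- a real risk
```

A rightsizing decision made from the ~17% rollup alone would undersize this workload.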

With Densify, you can use our policy simulator to forecast CPU utilization across a predicted day, where every hour is broken out into quartiles based on five-minute samples.
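A minimal sketch of that per-hour quartile breakdown is shown below, assuming the five-minute CPU samples sit in a pandas Series indexed by timestamp (the data here is synthetic, and Densify's actual simulator is more involved):

```python
# Sketch: for each hour of the day, compute quartile boundaries across all
# five-minute samples observed in that hour.
import numpy as np
import pandas as pd

def hourly_quartiles(cpu: pd.Series) -> pd.DataFrame:
    """Quartile boundaries of five-minute samples, grouped by hour of day."""
    grouped = cpu.groupby(cpu.index.hour)
    return pd.DataFrame({
        "min": grouped.min(),
        "q1": grouped.quantile(0.25),
        "median": grouped.quantile(0.50),
        "q3": grouped.quantile(0.75),
        "max": grouped.max(),
    })

# Example with synthetic data: seven days of five-minute samples.
idx = pd.date_range("2024-01-01", periods=7 * 24 * 12, freq="5min")
cpu = pd.Series(np.random.default_rng(1).uniform(5, 80, len(idx)), index=idx)
print(hourly_quartiles(cpu).round(1))
```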

Densify then leverages our policy engine to automatically run the predicted workload pattern against configurable thresholds, fit-for-purpose rules, a deep understanding of infrastructure capabilities, risk tolerance, and other drivers. These are applied as required based on the particular hosted application, the line of business, and other criteria to generate the final optimal resource recommendation.
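In highly simplified form, that policy step amounts to testing the predicted pattern against configurable thresholds for each candidate instance type. The Policy fields, the tiny vCPU catalog, and the linear rescaling below are all illustrative assumptions, not Densify's actual rules:

```python
# Hypothetical, simplified stand-in for the policy-engine step: check a
# predicted hourly quartile pattern against configurable thresholds.
from dataclasses import dataclass

import pandas as pd

@dataclass
class Policy:
    max_sustained_cpu: float = 70.0  # q3 must stay below this every hour
    max_peak_cpu: float = 95.0       # hourly max must stay below this

# Tiny illustrative catalog: vCPUs per instance type (not a full spec).
CATALOG = {"t2.medium": 2, "t3a.large": 2, "t3a.xlarge": 4}

def fits(hourly: pd.DataFrame, current_vcpus: int,
         candidate_vcpus: int, policy: Policy) -> bool:
    """Rescale the predicted CPU% pattern to the candidate's capacity
    (a crude linear assumption) and check every hour against the policy."""
    scale = current_vcpus / candidate_vcpus
    return bool(((hourly["q3"] * scale) < policy.max_sustained_cpu).all()
                and ((hourly["max"] * scale) < policy.max_peak_cpu).all())

# e.g., with the quartile table from the previous sketch:
# fits(hourly_quartiles(cpu), CATALOG["t2.medium"], CATALOG["t3a.large"], Policy())
```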

The end result is that your workload is matched intelligently with the most cost-effective and optimally performant infrastructure, mitigating risk and eliminating unnecessary waste across your public cloud and containerized environments.