Thursday 8 December 2011

The un-utilization rate

The un-utilization rate? What’s the un-utilization rate?





Capacity Management in the cloud depends on knowing the headroom for growth, the capacity to handle new business, and how well the operational budget is being leveraged to deliver services efficiently. Whereas most capacity managers' first focus is the utilization of critical compute assets, the needs of the cloud enterprise are more accurately served by understanding the un-utilization rate: that is, the amount of capacity that is available to accommodate new business services or growing demands. And while looking at utilization as a sheer percentage provides some insight, a fundamental consideration of the un-utilization rate is that capacity in a heterogeneous environment is not uniform. We can think of this new KPI in three ways (a short sketch after the list illustrates all three):


1) The ‘smoothed peak’ utilization value (commonly expressed as a 95th or 98th percentile). This represents the effective peak utilization whilst disregarding the occasional spike that every component is bound to experience in normal operations.
2) The amount of capacity available, normalised into a common unit. For example, with network cards you may encounter 100Mbps, 1Gbps and 10Gbps hardware. For memory, you may encounter 32GB, 64GB or 128GB. For CPU, you will certainly need a modelling approach to account for the variety of chip architectures and scalability parameters, likely normalised against a common benchmark or set of benchmarks, such as SPECint.
3) The maximum amount of capacity that can be used without performance degradation. For this, you either need a modelling approach or a wide set of engineering benchmarks that can be applied against all components in the estate. For example, a common benchmark for Ethernet is that performance degradation begins at around 40% utilization. For CPU, depending on the type of processor, you could use a maximum desired utilization threshold of 90%. The alternative is a modelling approach that can represent different workloads, allowing you to distinguish between a batch workload, where CPU utilization of 100% is often desired, and a micro-transactional workload running on Windows, where context-switch overhead can become significant above 75%.
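To make those three aspects concrete, here is a minimal sketch in Python. Every figure in it (the utilization samples, the benchmark rating of 400 units and the 90% threshold) is an assumption for illustration rather than a measurement:

```python
import numpy as np

# Hypothetical five-minute CPU utilization samples for one host (percent busy).
rng = np.random.default_rng(42)
samples = rng.normal(loc=55, scale=12, size=288).clip(0, 100)

# 1) Smoothed peak: the 95th percentile disregards the occasional spike.
smoothed_peak_pct = np.percentile(samples, 95)

# 2) Normalised capacity: the host's CPU expressed in common benchmark units
#    (an assumed SPECint-style rating, so unlike hardware can be compared).
host_capacity_units = 400.0

# 3) Maximum desired utilization before performance degrades (assumed 90% for CPU).
max_desired_pct = 90.0

used_units = host_capacity_units * smoothed_peak_pct / 100.0
usable_units = host_capacity_units * max_desired_pct / 100.0

print(f"Smoothed peak: {smoothed_peak_pct:.1f}%")
print(f"Headroom: {usable_units - used_units:.1f} normalised units")
```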




Applying these three aspects of capacity to our un-utilization rate, we can see that:


Un-utilization rate = (max desired utilization – smoothed peak utilization) * capacity in common units
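And as a rough sketch of how the formula rolls up across a small estate (again in Python; the component names, ratings and thresholds are assumed purely for illustration):

```python
from collections import defaultdict

# Hypothetical inventory: component -> (resource type, capacity in common units,
# max desired utilization %, smoothed peak utilization %).
components = {
    "db-host-cpu":   ("cpu_units",    720.0,  90.0, 62.0),
    "app-host-cpu":  ("cpu_units",    360.0,  90.0, 48.0),
    "core-switch-1": ("network_mbps", 10_000, 40.0, 22.0),  # Ethernet degrades early
}

headroom_by_type = defaultdict(float)

for name, (rtype, units, max_pct, peak_pct) in components.items():
    # Un-utilization rate = (max desired utilization - smoothed peak) * common units
    unutilized = (max_pct - peak_pct) / 100.0 * units
    headroom_by_type[rtype] += unutilized
    print(f"{name}: {unutilized:,.0f} un-utilized units")

for rtype, headroom in headroom_by_type.items():
    print(f"{rtype}: {headroom:,.0f} units of headroom in total")
```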


  In the cloud environment, two further effects should now be considered.


1) The temporal nature of workload, and the demands of workload balancing, mean that a macro view of the commodity un-utilization rate is needed to show the available headroom at a glance. Macro, platform and resource-pool views are all required to ensure a holistic view of capacity management (a roll-up sketch follows the next point).


2) The combination effects of varying workloads mean that individual workloads are difficult to break out. In a cloud environment, capacity is made available quickly and dynamically to allow business users to adjust to varying levels of demand. But from a provider’s perspective, as one workload waxes and another wanes, the demand on the underlying capacity changes independently of any individual workload; it depends instead on the combination of workloads and their trends. For this reason, forecasting is best done through regression analysis of the combined workloads, and it remains highly dependent on business growth forecasts (a simple forecasting sketch also follows).
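Taking the first of these effects, one simple way to provide those macro, platform and resource-pool views is to roll the per-component figure up at each level. The sketch below uses Python with pandas; the pool, platform and component names, and the headroom figures, are hypothetical:

```python
import pandas as pd

# Per-component un-utilized capacity in common units (illustrative figures).
df = pd.DataFrame([
    {"pool": "prod-vmware-a", "platform": "x86-virtual", "component": "esx-01",  "unutilized_units": 180.0},
    {"pool": "prod-vmware-a", "platform": "x86-virtual", "component": "esx-02",  "unutilized_units": 95.0},
    {"pool": "prod-aix-b",    "platform": "power",       "component": "lpar-07", "unutilized_units": 240.0},
])

# Resource-pool view: headroom available for placement within each pool.
print(df.groupby("pool")["unutilized_units"].sum())

# Platform view: headroom per technology stack.
print(df.groupby("platform")["unutilized_units"].sum())

# Macro view: a single headroom figure for the estate.
print(df["unutilized_units"].sum())
```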
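And for the second effect, a simple starting point is to regress the combined (summed) demand over time rather than each workload separately, then layer the business growth assumption on top. The Python sketch below does exactly that with made-up monthly data and an assumed 10% growth uplift:

```python
import numpy as np

# Monthly demand for three workloads, in normalised capacity units (illustrative).
months = np.arange(12)
workload_a = 100 + 5 * months + np.random.default_rng(0).normal(0, 4, 12)
workload_b = 80 - 2 * months + np.random.default_rng(1).normal(0, 4, 12)
workload_c = 60 + 1 * months + np.random.default_rng(2).normal(0, 4, 12)

# The combination is what actually consumes the shared pool.
combined = workload_a + workload_b + workload_c

# Fit a linear trend to the combined demand and project six months ahead.
slope, intercept = np.polyfit(months, combined, deg=1)
future_months = np.arange(12, 18)
forecast = intercept + slope * future_months

# Layer on an assumed business growth forecast (e.g. a 10% uplift from a new product).
business_growth = 1.10
print(forecast * business_growth)
```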


With these effects accounted for, predictive analytics can be applied to anticipated organic growth, in close consultation with the business owner. In a private cloud, the business owner is the CIO: the position accountable for running the cloud at a profit and for ensuring sufficient capacity for all customers.

Finally, our recommendation is that all cloud providers tightly correlate their financial management controls with their capacity management routines, in order to understand the cost of capacity provision and how effectively those investments are leveraged. This is where the un-utilization rate adds significant business value. By expressing capacity in sheer financial terms, the un-utilization rate provides both the ability to safely downsize capacity and reduce variable operational expense, and the ability to carefully and deliberately manage capacity to accommodate new and growing workloads as the cloud business expands.
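As one last, minimal illustration of that correlation (Python again; every unit cost and volume below is assumed), putting a unit cost against the un-utilized capacity turns the KPI directly into a financial figure:

```python
# Illustrative unit costs per normalised capacity unit per month.
unit_cost_per_month = {"cpu_units": 1.80, "memory_gb": 0.25, "network_mbps": 0.02}

# Un-utilized capacity by resource type, in common units (illustrative figures).
unutilized_units = {"cpu_units": 560.0, "memory_gb": 4_096.0, "network_mbps": 3_200.0}

monthly_cost_of_headroom = sum(
    unit_cost_per_month[rtype] * units for rtype, units in unutilized_units.items()
)
print(f"Monthly cost of un-utilized capacity: £{monthly_cost_of_headroom:,.2f}")
```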

Thursday 25 August 2011

SAP Capacity Management - 3 top tips

The result of good SAP Capacity Management is driving down hardware costs whilst maintaining good performance.  As I argue strongly, if you can directly correlate cost and capacity (as I have proved on www.limsol.com/osprey) then capacity management = cost management (as discovered by Bruce Robertson of Gartner at http://blogs.gartner.com/bruce-robertson/2009/07/03/capacity-planning-equals-budget-planning/).  Therefore selecting the optimum capacity is about balancing cost and performance (or, more accurately, service levels).  Taken into the cloud model, these variations in the marketplace offer as distinct a choice as Bentley and Trabant do in the automotive marketplace.

Tip 1: SAP Sizing
With SAP, the opportunity is clear.  The current sizing process based on SAPS is certainly not fit for purpose, and is far too vendor-reliant to detect variations in value proposition between different hardware manufacturers.  Adding greater scientific rigour to the sizing process can right-size and right-cost an environment, with controlled aggression in risk management.  Knowing your options can be valuable.  In this sense, Hyperformix (now CA Capacity Management) claims $5M in SAP savings at a high-profile petrochemical company.
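By way of illustration only, a more rigorous, vendor-neutral sizing step might convert business volumes into SAPS and then into candidate hardware. In the Python sketch below, the peak volume, the per-server SAPS ratings and the 65% utilization target are assumptions; only the definition of 100 SAPS as 2,000 fully processed order line items per hour comes from SAP:

```python
import math

# SAP's definition: 100 SAPS = 2,000 fully processed order line items per hour.
LINE_ITEMS_PER_HOUR_PER_100_SAPS = 2_000

# Assumed peak-hour business volume for the system being sized.
peak_line_items_per_hour = 180_000
required_saps = peak_line_items_per_hour / LINE_ITEMS_PER_HOUR_PER_100_SAPS * 100

# Assumed vendor-neutral server ratings in SAPS, as published in benchmark results.
candidate_servers = {"vendor-a-2-socket": 45_000, "vendor-b-4-socket": 90_000}

# Size against a maximum desired utilization (65% is a commonly quoted target).
max_desired_utilization = 0.65
for name, saps in candidate_servers.items():
    servers_needed = math.ceil(required_saps / (saps * max_desired_utilization))
    print(f"{name}: {servers_needed} server(s) to deliver {required_saps:,.0f} SAPS")
```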

Tip 2: cost control & billing
Let's be clear.  A fundamental aspect of cloud computing is the transparency of cost.  As I've argued, if capacity correlates directly to cost, then we can say that a fundamental aspect of cloud computing is the transparency of capacity.  Regressive engineering leads to sizing and billing based on transaction volumes & user load.  By mining the SAP and Java stack, a full transaction profile can be married with a cost profile for cost control and billing.  By seizing control of this information, competitive SAP service providers will undercut market rates and provide flexible billing and deployment models.
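A minimal sketch of what marrying a transaction profile with a cost profile might look like (Python; the transaction types, volumes and per-transaction rates are all hypothetical):

```python
# Hypothetical monthly transaction profile mined from the SAP/Java stack, per tenant.
transaction_profile = {
    "tenant-a": {"VA01_create_order": 120_000, "dialog_step": 2_400_000},
    "tenant-b": {"VA01_create_order": 30_000,  "dialog_step": 600_000},
}

# Hypothetical cost profile: cost attributed to each unit of work, derived from
# the capacity each transaction type consumes and the cost of that capacity.
cost_per_transaction = {"VA01_create_order": 0.004, "dialog_step": 0.0005}

for tenant, volumes in transaction_profile.items():
    bill = sum(cost_per_transaction[t] * count for t, count in volumes.items())
    print(f"{tenant}: monthly charge £{bill:,.2f}")
```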

Tip 3: be selective with publishing information
One man's meat is another man's poison.  Therefore selective propagation of capacity/cost/performance information is necessary.  For business, development and operations to work in harmony, synergistic relationships must be found and nurtured.  Maintaining a conversation about overall cost/benefit, alongside new releases, workload changes and operational environment changes, can only improve the planning and delivery processes.

Wednesday 25 May 2011

It's a competitive market, this cloud business. What makes the difference between success and failure?

I was preparing a training event for Capacity Management recently, and came across this excellent article on the CIO website: Cloud Makes Capacity Planning Harder - 3 Fight Back Tips.  It strikes me that the one aspect of cloud computing that represents the paradigm shift more than any attempt (and there have been many) to characterize cloud models is that suddenly there is a competitive market for the provision of IT services.  No longer does an enterprise remain constrained by internal IT services, and IT Executives are being asked to support external services as well as their own.

Not since the major transition from silos to shared services has there been such a significant change, and such a challenge, for CIOs.  The challenge from the CEO is this - become more competitive, or risk losing your business to external providers.  As the competition grows stronger, internal IT providers struggle harder and harder to justify their selection in an open marketplace.  The birth of the private cloud model is the CIO's attempt to respond to the threat of external providers.

Any marketplace is defined by offerings which differ in cost and quality.  And the cloud computing marketplace is no different.  Market segmentation into SaaS, PaaS and IaaS offerings allows niche suppliers to prosper alongside larger vendors.  The cost of service provision is transparent and elastic to meet demand.

The CIO article somewhat skirts around this topic, advising readers to develop competitive chargeback rates and a meaningful mixed deployment strategy.  Ultimately, however, the cloud marketplace will demand choice in cost and quality, meaning Capacity Management and Capacity Chargeback will be key differentiators.  The challenge for the private cloud is to provide a meaningful and transparent option to compete.