A few folks have been posting to ask for advice on choosing a Capacity Management tool. The normal result is a flurry of responses from vendors trying to position their tool as the best. I prefer a different approach - some tools are better for some situations than others, and all of them have limitations. Capacity Management has to cover such a diverse range of platforms and use-cases that this is inevitable.
The fundamental principle of Capacity Management
As I tweeted recently, the fundamental principle of Capacity Management is that it exists to reduce cost and risk to business services. When we're designing the process, selecting a tool, or looking for skills, it is imperative that we keep hold of that guiding principle in every decision. This means that a silo'd infrastructure view does little to help us understand or quantify the risk to business services (unless the business service happens to map 100% onto the infrastructure, which was the old way of running things but pretty much incompatible with the cloud).
The second principle of Capacity Management
Not all capacity is created equal, so it is critical that we abandon percentages as a way of comparing different assets. Percentages are only useful for comparing a current value against another value for the same asset, such as its maximum. You can't take one server at 20% and another running at 20% and assume that consolidating them will result in a server running at 40% (unless the configurations are identical). We need a way of comparing capacities. One popular approach is to use MHz, but it is plagued with inaccuracy. The main issue is that the processing power of a CPU is not directly correlated with MHz. In fact (as I intend to show in a later post) the difference can be roughly stated as 25% per year for some chipsets. This means that a 2GHz chip from 2010 is roughly equivalent to a 1.6GHz chip from 2011 (2 ÷ 1.25 = 1.6). And don't forget to control for the effect of hyperthreading - your monitoring tools will report on logical CPUs, not physical ones.
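To make that concrete, here is a minimal sketch of what normalizing to a common capacity unit might look like. The chipset names and per-core ratings below are invented for illustration - a real implementation would use published benchmark figures (such as SPECint rates) for the actual hardware involved:

```python
# Sketch: comparing utilization across hosts by normalizing to a common
# capacity unit instead of raw percentages or MHz.
# The ratings below are hypothetical illustrative values, not real benchmarks.

# Per-core throughput rating for each chipset (arbitrary benchmark units).
CHIP_RATING = {
    "xeon-2010": 10.0,   # older chipset
    "xeon-2011": 12.5,   # ~25% faster per core at the same clock
}

def capacity_units(chipset: str, physical_cores: int) -> float:
    """Total capacity of a host in benchmark units (physical cores only -
    counting logical/hyperthreaded CPUs would overstate real capacity)."""
    return CHIP_RATING[chipset] * physical_cores

def used_units(chipset: str, physical_cores: int, utilization_pct: float) -> float:
    """Convert a utilization percentage into absolute benchmark units."""
    return capacity_units(chipset, physical_cores) * utilization_pct / 100.0

# Two servers both "at 20%" - but they are not consuming equal capacity.
old = used_units("xeon-2010", physical_cores=8, utilization_pct=20)   # 16.0 units
new = used_units("xeon-2011", physical_cores=8, utilization_pct=20)   # 20.0 units

# Consolidating both workloads onto the newer server:
target_total = capacity_units("xeon-2011", physical_cores=8)          # 100.0 units
print(f"Combined utilization: {(old + new) / target_total:.0%}")      # 36%, not 40%
```

The specific numbers don't matter; the point is that the consolidation answer (36% here, not 40%) only falls out once both workloads are expressed in the same absolute unit.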
The third principle of Capacity Management
Cross-platform visibility is essential. There is no purpose in silo'd capacity management. As the fundamental principle indicates, it is imperative to identify risk to business services, which means end-to-end visibility of every component that could introduce risk to service quality: storage, network, virtual, physical - and more. If you are operating within a silo right now, explore ways to increase your scope and introduce a single, standard process - eliminating the variance in approach and in accuracy that is inevitable with a silo'd view.
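As a rough sketch of why the end-to-end view matters, assume we can map a business service onto its supporting components (the service map and figures below are hypothetical). The risk to the service is set by its most constrained component, which a silo'd view of any single tier can easily miss:

```python
# Sketch: end-to-end headroom for a business service, assuming the service
# can be mapped onto its supporting components across all tiers.
# The service map and utilization figures are invented for illustration.

SERVICE_MAP = {
    "online-banking": [
        {"name": "web-vm-cluster", "tier": "virtual",  "used": 55, "capacity": 100},
        {"name": "db-server-01",   "tier": "physical", "used": 70, "capacity": 100},
        {"name": "san-pool-a",     "tier": "storage",  "used": 92, "capacity": 100},
        {"name": "core-switch-2",  "tier": "network",  "used": 40, "capacity": 100},
    ],
}

def service_headroom(service: str) -> dict:
    """Headroom of the service = headroom of its weakest component."""
    weakest = min(
        SERVICE_MAP[service],
        key=lambda c: (c["capacity"] - c["used"]) / c["capacity"],
    )
    pct = (weakest["capacity"] - weakest["used"]) / weakest["capacity"] * 100
    return {"component": weakest["name"], "tier": weakest["tier"], "headroom_pct": pct}

print(service_headroom("online-banking"))
# {'component': 'san-pool-a', 'tier': 'storage', 'headroom_pct': 8.0}
```

A compute-only silo would report this service as comfortable at 55-70% utilization - yet it's the storage tier that carries the real risk.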
The fourth principle of Capacity Management
Operate within a limited set of use-cases. The natural implication of the third principle is that you could extend your approach to include every asset in a business service - even down to the capacity of things you can't measure. The fourth principle says that you should constrain the scope of your Capacity Management activities to align with your IT management objectives. You might be solely interested in optimizing your virtual estate, or you might be focused on a DevOps capacity management lifecycle. Your choice of tool and process should remain consistently aligned with your objectives.
The fifth principle of Capacity Management
Don't Capacity Manage in isolation! Remember that one of the benefits of Capacity Management is managing risk, and risk implies change. Change can come from a business perspective (new channels, better performance, new markets etc.) or an IT perspective (virtualizing, consolidating, upgrading, software releases etc.). Your Capacity Management process must integrate with, and add insight to, these other critical management processes. Correlating with IT Financial Management offers cost/efficiency benchmarking. Connecting with Release Management provides scalability and impact assessments. Working with Business Continuity Management gives scenario planning and quantified risk mitigation - a sketch of that kind of scenario follows below. Capacity Management operating in isolation adds no value to the enterprise and should be terminated.
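As an example of the Business Continuity connection, here is a minimal sketch of a quantified N-1 failover scenario - can a cluster absorb the load of any single failed host? The cluster figures are invented for illustration, and a real assessment would use the normalized capacity units described under the second principle:

```python
# Sketch: a quantified Business Continuity scenario - check whether the
# surviving hosts can absorb the workload of any single failed host.
# Host loads and capacities are hypothetical, in normalized capacity units.

HOSTS = {"esx-01": 60.0, "esx-02": 45.0, "esx-03": 55.0}  # used capacity units
HOST_CAPACITY = 100.0                                      # per host, same spec

def n_minus_one_risk(hosts: dict, capacity: float) -> list:
    """Return the failure scenarios in which the surviving hosts
    cannot absorb the failed host's workload."""
    at_risk = []
    for failed, load in hosts.items():
        survivors = {h: u for h, u in hosts.items() if h != failed}
        spare = sum(capacity - u for u in survivors.values())
        if load > spare:
            at_risk.append((failed, load - spare))  # (scenario, shortfall)
    return at_risk

risks = n_minus_one_risk(HOSTS, HOST_CAPACITY)
print(risks or "Cluster survives any single host failure")
```

Extending the same loop to two-host failures, or to a whole-site failover, turns "we should be fine" into a quantified statement of risk that Business Continuity planning can act on.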
Summary
Capacity Management is a fundamental of business. Whether the assets in question are office cubicles, trucks in a fleet, or virtual servers, they all fulfil a business need and represent an investment by the business to meet that need. Enterprises operating at lower risk and cost enjoy a competitive advantage in the market - and Capacity Management is the vehicle to that advantage. Use the five guiding principles outlined in this post to give yourself the very best chance of success.