Tuesday, 16 July 2013
Accelerate Innovation - by increasing efficiency
Saturday, 27 April 2013
Capacity Planning the #Devops way
The notion of #Devops serves to accelerate time to market through greater cohesion in the release management life cycle.
So-called 'service virtualisation', such as the offerings from IBM and CA LISA, enables a modular testing practice by learning the typical behaviour patterns of defined systems. The effect is a more tightly focused testing process that reduces the dependency on external (inert) services.
Release Automation, such as in the newly acquired Nolio solution, allows the testing process to be further streamlined by providing cohesion across the multistage process. The benefits are felt most keenly where complex dependencies and configurations add magnitude to setup and teardown for QA.
Agile methods need agile release management processes, and this is the whole point of #Devops. However, the risks in this agile thinking come in end-to-end performance.
The missing link here is provided by prerelease Capacity Planning (such as provided by CA Performance Optimizer), a virtual integration lab that brings together the performance and capacity footprints of each component in the production service. And while some of those components may be changing and therefore measured through the release management process, others are not - and are measured in production. Creating and assimilating component performance models allows the impact of each sprint to be seen in IT operations.
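As a minimal sketch of the idea (all component names and figures here are hypothetical illustrations, not taken from any product), per-component capacity footprints - some measured in production, some projected from the release pipeline - can be assembled into a service-level view, so the capacity impact of a release is visible before deployment:

```python
# Each component model: demand in normalized capacity units per 1,000
# transactions. "web" and "db" are measured in production; "app" changed
# this sprint, so its figure comes from the release management process.
baseline = {"web": 1.2, "app": 2.5, "db": 3.1}
post_release = {"web": 1.2, "app": 3.0, "db": 3.1}

def service_footprint(models):
    """Total capacity demand of the service per 1,000 transactions."""
    return sum(models.values())

delta = service_footprint(post_release) - service_footprint(baseline)
pct = 100.0 * delta / service_footprint(baseline)
print(f"Release adds {delta:.2f} units per 1k tx ({pct:.1f}% increase)")
```

The same assimilation step then feeds the production capacity plan, so each sprint's step change is accounted for rather than discovered in operations.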
Capacity Planning is a true #Devops process. Only by adapting the capacity plan to take into account the step changes due to software release, can the risks of poor scalability and damaging impact be accurately guarded against.
Monday, 25 March 2013
Take a Capacity Healthcheck
But what about our IT enterprise? It's not uncommon - even in 2013 - to come across siloed thinking that stifles the health of the organisation. Purchasing decisions are made within the silos, leading to distorted allocations of capacity based on political, rather than engineering, needs. Further, because of these silos, entropy increases as financial accountability struggles to permeate the organisation. Provisioning decisions are made on a risk-averse basis without insight into how business demands translate into capacity requirements.
And now it's time to change.
The financial crisis has caused a significant change in emphasis in most major corporations. IDC estimate that over 50% of major corporations are actively planning investments in better capacity management functions. Cloud-sourcing is on the increase to assist with the deferment of cost, and a refocus of investment on the core business. Capacity has become a commodity, and in an open economy, is becoming subject to the same commercial forces that balance cost and quality in any marketplace.
But how can enterprises leverage their purchasing power, when they don't know how much commodity capacity they really need? How can they right-size investments and defer costs without incurring risk to their top-line revenues?
Actually, this is a question that enterprises have addressed in many of their other lines of operation. Full financial accountability has ensured the right-sizing of many other enterprise assets; whether that is employees, hot-desks, freight containers or manufacturing capacity. Successful companies have figured out that costs must be aligned proportionately with revenues. The only thing that makes IT different is the complexity, and the lack of insight.
So where to start in right-sizing IT?
The answer is perhaps startlingly clear - you should begin where you always begin, with your requirements in mind: measure capacity consumption across all silos. The trick then is to bring in a method of normalization, a model library that provides weighting factors according to the make and configuration of your estate. This same method can then be applied to plan a migration, transposing configurations easily to determine the optimum sizing on alternative real estate.
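To make the normalization step concrete, here is a hedged sketch (server models, weighting factors and utilization figures are all invented for illustration - real libraries would use benchmark-derived ratings): measured utilization across a mixed estate is converted into common capacity units, and the total demand is then transposed onto a target platform to size a migration.

```python
import math

# Weighting factors: capacity units per host, by model (hypothetical values,
# standing in for a benchmark-derived model library).
weights = {"ServerA": 100.0, "ServerB": 180.0, "TargetC": 250.0}

# Measured average utilization (fraction of each host) across the silos.
estate = [("ServerA", 0.60), ("ServerA", 0.45), ("ServerB", 0.70)]

# Normalize: convert each host's utilization into common capacity units.
total_demand = sum(weights[model] * util for model, util in estate)

# Transpose: how many TargetC hosts are needed at a 65% peak-utilization cap?
target_hosts = math.ceil(total_demand / (weights["TargetC"] * 0.65))
print(f"Normalized demand: {total_demand:.0f} units -> {target_hosts} TargetC hosts")
```

The design choice worth noting is the 65% cap: right-sizing is against a headroom threshold, not against 100% of the nominal capacity.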
Thursday, 24 January 2013
Consumer/Provider : the twin forces in Capacity Management
Monday, 14 January 2013
To Transform IT - Revisit The Basics
Nice idea. But does this happen widely in the field? Evidence indicates that the transformation to the cloud model in the majority of organisations has hit a glass ceiling. With the existence of service catalogues, virtual adaptable infrastructures, and increasingly automated processes, IT organisations have put in place the basic ingredients that enable some of the tasks associated with cloud service provision. However, the vision of agile IT-on-demand has been held back by slow adoption of a business-integrated view more aligned with the balance sheet. IT resources, like many business resources, come at a cost. Not just a cost to purchase, but a cost to provision, a cost to operate, a cost to maintain and a cost to support. Factoring cost of ownership into resource provisioning requests, and aligning investment appropriately according to demand, are the two prerequisites for the agile business.
These pre-requisites translate themselves into two management capabilities that are widely missing in IT operations today. Adding these capabilities to IT management functions will not only provide insight and control over efficient use of IT resources, but will also provide consumer-friendly insight to support optimal alignment of resources.
Firstly, by accounting for the full cost of ownership of provisioned resources - purchase, provisioning, operation, maintenance and support - and exposing that cost to the consumer at the point of request.
Secondly, by assessing usage patterns quickly, dynamically and providing short and mid-term trends to the consumer. The aim here is to ensure the right amount of headroom is maintained in the environment. A service-aligned view of capacity allocated is essential, such that the service headroom can be correctly calculated according to the weakest link in the chain. The insight that's needed is to gauge whether the service headroom will be sufficient to meet demands, according to trends, forecasts and other business analytics. Regression and correlation between workload and resource usage is another function classically described in Capacity Management.
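The headroom calculation described above can be illustrated with a small sketch (tier names, thresholds and usage figures are invented): service headroom is governed by the weakest link across tiers, and a simple linear trend on recent usage gauges whether that headroom will meet forecast demand.

```python
def slope_intercept(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Weekly peak utilization (%) per service tier over the last six weeks.
usage = {
    "web": [40, 42, 44, 47, 49, 51],
    "db":  [70, 71, 73, 74, 76, 78],
}
weeks = list(range(6))
limit = 85.0  # utilization threshold treated as "headroom exhausted"

# The weakest link is the tier whose trend reaches the limit first.
for tier, ys in usage.items():
    a, b = slope_intercept(weeks, ys)
    weeks_left = (limit - ys[-1]) / a if a > 0 else float("inf")
    print(f"{tier}: trend {a:.2f}%/week, ~{weeks_left:.1f} weeks of headroom")
```

Here the "db" tier trends more slowly but starts closer to the limit, so it is the weakest link - exactly the service-aligned view, rather than a per-box view, that the paragraph argues for.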
So - what are we saying here? That Capacity Management is the missing link between cloud operations and an agile enterprise? No - not quite. Capacity Management as it is currently executed and understood is not fit for this purpose. However, connecting Capacity Management with both the demand cycle - notably from a service not an infrastructure point of view, and also with Financial Management has the ability to disrupt the enterprise cloud, and transform it to become a true partner to the agile business.
Monday, 19 November 2012
Leadership .. and the rush to the cloud
The reason I think this is happening is that the fundamentals of IT service delivery haven't gone through a revolution in the last 5 years. Sure, there have been leaps forward - notably in the smartphone and tablet markets, which have radically influenced accessibility and demand for services. Over the last 5 years, we have also seen incremental advances in IT capacity, in networks (notably end-user bandwidth, which has been driven by increasing demand), and in compute and storage terms. But the fundamentals of IT service delivery haven't changed. If you had implemented ITIL 5 years ago, you would still have the same frame of reference today and it would serve you well.
The difference is perception, and the advances of running IT like a business. Yet, this was one of the main strategies before the cloud came along. The reality is that business caught up with IT, and debunked the myths of the risk-averse culture that became prevalent in many large enterprises. The business started to demand quality of service, and began to put a focus on costs. Just like an enterprise would manage costs in any other part of its business, IT soon found that it was under similar cost pressures - and these became accelerated as the global downturn impacted profit margins.
What's really interesting though is the way that certain business models have begun to prosper in this new dynamic. Those are models that allow businesses to move away from large sunk capital investments, and towards a flexible model that allows them to account for their costs as a percentage of their revenue stream. There clearly is a great deal to be gained in accounting transparency here, but there's more - these flexible arrangements allow businesses to scale their cost base according to their overriding dynamic. On the face of it, it's a low-risk engagement for the customer.
But here comes the rub. As any risk analyst will tell you, it's the weakest link in the chain that tells you where your true risk really lies; and of course there are a number of risks associated with moving to this flexible arrangement that could scupper the whole deal. For several years, the security risks of losing personal data were often quoted as a show-stopper. More reputable companies offering their services have mitigated those risks -- for now at least. There are a certain number of regulatory factors to take into account, not least the actual jurisdiction of the data stores. Balancing these competing risk factors is the business of IT leaders.
Wednesday, 17 October 2012
Strip Down Cloud: the basics of cloud provision
All this stuff about what the cloud really is - is really just guff. Take for example self-service. This business guy doesn't care about self-service. In fact, it would be perfect if somebody else could do it for him. He wants a way of managing the contract himself - but he doesn't want to administer the service himself. If his business volumes go up - he wants to adjust the contract, so he has the capacity to support his business. If the volumes go down, likewise - the alignment with his business needs is what's important to him.
Take virtualization technology. This is a means to an end: it provides the rapid provisioning that this business guy really wants. But he doesn't care about virtualization. He cares about rapid response. If he orders more minutes on his phone contract, he wants the minutes instantly (although he might be satisfied to wait until the end of the monthly billing period). The same thing is true with his IT cloud. He wants to adjust his contract - and then wants to see rapid implementation of the changes. But it could be a horde of magic goblins for all he cares.
The only things this guy cares about are the quality of service he is receiving, the cost of the service, and the ability to flexibly manage this service. Just like his phone contract - if the quality is no good, he will cancel it and move to a provider with better performance. If the cost is too high, he will move to a more competitive provider. And the flexibility in the contract will allow him to do that (although phone contracts often have a lock-in term; of course, if the businessman were designing the terms, they wouldn't).
So what are the essentials of the cloud, from a technologist's point of view?
- the ability to measure and manage a quality of service. A provider who values customer service (and many would argue that customer service is the cornerstone of a successful selling organisation) will proactively manage service levels and ensure that the cloud customer is getting the service levels that they need - and that they are contracted to. For this, we would recommend not only some level of service assurance monitoring but also some risk avoidance through predictive analytics; typically found in a capacity planning process. In addition, where service levels are set either contractually or through expectation, some form of management of performance against those service levels - business service insight - will be imperative.
- the ability to manage cost and capacity effectively. Given that an unsatisfied customer may change contracts freely or at will - and that the cloud marketplace is a competitive one - cost is the second important factor in a customer's investment decision. Cost in a cloud environment is borne mainly through infrastructure operations, and ties together elements like facilities, management, power/cooling, and capital costs. The Uptime Institute published a very good paper on this recently. Equally though, the price that is charged to the customer must either equate to the cost (in an internal private cloud charged as a cost center) or exceed the cost (as a profit center in a business), and be derived from the cost of the allocated capacity.
- the ability for a customer to flexibly manage their contract. There must be an easy way for the customer to change their service. Increasingly tech-savvy customers demand portals by which they can manage their own service levels. A self-service portal is often the lowest cost way of providing this capability. However, the cloud does not mandate a portal; in fact, a call center can provide the same facility. Most of the time, I manage my phone contract through a call center - and the benefit is that my provider gets the chance to sell me something new every time!
- the ability to rapidly deploy any changes to a customer's service. The cheapest and quickest way of doing this is likely through usage of virtualization technology, where existing and unused capacity can be allocated to a customer. New technologies are emerging here all the time, around storage and network capacity as well as compute capacity. Hybrid cloud providers are using third-party capacity to extend their capability quickly and leverage existing data center space.
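The cost-and-capacity point above can be made concrete with a toy calculation (all figures and category names are hypothetical): a unit cost for allocated capacity is derived from the pooled infrastructure costs, and the price then either recovers that cost (cost center) or adds a margin (profit center).

```python
# Illustrative monthly cost pool (currency units) - the cost elements
# named in the text: facilities, power/cooling, management, capital.
monthly_costs = {
    "facilities": 20_000,
    "power_cooling": 15_000,
    "management": 25_000,
    "capital_amortized": 40_000,
}
allocated_vcpus = 2_000  # total capacity units allocated to customers

# Unit cost: pooled cost spread over allocated capacity.
unit_cost = sum(monthly_costs.values()) / allocated_vcpus

def price_per_vcpu(margin=0.0):
    """Cost-center price at margin=0; profit-center price with margin > 0."""
    return unit_cost * (1.0 + margin)

print(f"Unit cost: {unit_cost:.2f} per vCPU per month")
print(f"Cost-center price:   {price_per_vcpu(0.0):.2f}")
print(f"Profit-center price: {price_per_vcpu(0.25):.2f}")
```

The key point is the direction of derivation: the price is computed from the cost of allocated capacity, not set independently of it.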