...and then an interesting thing happened over the last couple of weeks as I spoke at a couple of industry conferences and started asking IT operations people how they measured themselves and their environments. [Hint: I may have put the cart before the horse...]
Usually I'd start the discussion by asking people why they were considering (or deploying, or already using) Cloud Computing. It seems like a logical starting point, since they were attending a session focused on Cloud. It's not unusual for someone to speak up and say they occasionally use "public Service X" because it's "really inexpensive," and they wanted to see whether it's also applicable to their internal network. OK, fair starting point. So let's explore that a little more: how much does "Service X" really cost, in total?
Here's where the conversation begins to get interesting. Let's look at the things to consider (a rough cost sketch follows the list):
- Cost of the Server instance (usually a VM) - typically measured in $/hr, but could be daily or monthly
- Cost of Storage for the Server instance - typically measured in $/GB
- Cost of OS or Application licenses, if not included in the Server instance pricing - typically measured in $/CPU or $/RAM (note: some public services allow you to bring existing licenses, others do not; likewise, some software companies allow this and others do not)
- How much bandwidth does your application use, both inbound and outbound?
- How much I/O does your application generate when accessing storage, and how many I/O transactions does it perform over a given period of time?
- How frequently does the amount of Storage (GB) change?
- How many I/O transactions does your application use with an off-box database or queuing service?
- Does your application require load-balancing services? If so, how many GB inbound and outbound?
- Do you expect to architect your Cloud application to have components that are geographically separated for load-balancing or redundancy? How much traffic (in GB or I/O transactions) will cross those geographic boundaries?
- Do you expect to monitor the application remotely or have the public service do that for you?
- Do you expect to monitor the infrastructure remotely or have the public service do that for you?
- Do you store your application data in tiers, using variable cost storage media?
- Are you required to store archived data in a specific (known, verifiable, auditable) location for regulatory or compliance reasons?
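To make that list concrete, here's a minimal back-of-the-envelope sketch of how those line items roll up into a monthly bill. Every rate and usage figure in it is a hypothetical placeholder, not any provider's actual pricing; substitute your provider's price list and your own measured usage.

```python
# A rough, illustrative monthly cost model for a single public cloud workload.
# All rates and usage figures below are hypothetical placeholders.

HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical unit rates (check your provider's actual price list)
VM_RATE_PER_HOUR = 0.10       # $/hr for the server instance
STORAGE_RATE_PER_GB = 0.08    # $/GB-month for attached storage
BANDWIDTH_RATE_PER_GB = 0.12  # $/GB transferred outbound
IO_RATE_PER_MILLION = 0.10    # $ per 1M I/O transactions
LB_RATE_PER_GB = 0.008        # $/GB processed by the load balancer

# Measured (or estimated) usage for the application
usage = {
    "storage_gb": 200,          # provisioned storage
    "egress_gb": 150,           # outbound bandwidth per month
    "io_millions": 50,          # storage/database I/O transactions per month
    "lb_gb": 300,               # traffic through the load balancer per month
    "license_per_month": 40.0,  # OS/application licenses, if not bundled
}

def monthly_cost(u: dict) -> float:
    """Sum the per-usage line items into one monthly figure."""
    return (
        VM_RATE_PER_HOUR * HOURS_PER_MONTH
        + STORAGE_RATE_PER_GB * u["storage_gb"]
        + BANDWIDTH_RATE_PER_GB * u["egress_gb"]
        + IO_RATE_PER_MILLION * u["io_millions"]
        + LB_RATE_PER_GB * u["lb_gb"]
        + u["license_per_month"]
    )

print(f"Estimated monthly cost: ${monthly_cost(usage):,.2f}")
# With the placeholder numbers above, this prints roughly $154.40/month.
```

Notice that only the first line (the VM rate) is fixed; everything else moves with usage, which is exactly why the questions in the second half of the list matter.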
I raise these points not because I'm trying to be critical of IT organizations, but rather to highlight a few key considerations:
- The second set of questions all have variable answers that may or may not scale linearly (in usage or cost) the way #VMs and #GBs do. Not knowing those answers, or not being able to make reasonable estimates, can result in unexpected costs when you're paying for Cloud services on a per-usage model (e.g. OpEx, variable usage, "by the drink").
- If you're considering a Private Cloud deployment and looking at "shared resources" models for servers, storage, or network, you're going to need some estimate of those numbers to do efficient capacity planning. [NOTE: Some estimates of various application workloads are modeled in Cisco Validated Designs, but each environment will vary, and you should measure your own environment or consult your application vendors' websites.]
- If you're trying to do a cost or ROI comparison between an internal and external deployment of a new IT service, how can you compare them without being able to align costs in a similar way? (A simple comparison sketch follows this list.)
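As an illustration of what "aligning costs in a similar way" might look like, here's a hypothetical sketch that amortizes internal CapEx into a monthly figure so it can sit next to the per-usage estimate from the earlier sketch. Every number is a made-up placeholder, and real comparisons will include far more line items.

```python
# Hypothetical internal-vs-external comparison. The key step is converting
# internal CapEx into a monthly figure so both options use the same units.
# All numbers are illustrative placeholders, not benchmarks.

def internal_monthly_cost(capex: float, amortization_months: int,
                          power_cooling: float, admin_share: float) -> float:
    """Amortize hardware CapEx and add recurring monthly OpEx."""
    return capex / amortization_months + power_cooling + admin_share

internal = internal_monthly_cost(
    capex=12_000,            # server + storage purchase price
    amortization_months=36,  # three-year depreciation schedule
    power_cooling=60,        # monthly share of facilities costs
    admin_share=150,         # monthly share of an administrator's time
)

external = 154.40  # e.g. the output of the earlier public-cloud cost sketch

print(f"Internal: ${internal:,.2f}/mo vs. External: ${external:,.2f}/mo")
```

The point isn't the specific numbers; it's that without usage estimates on the external side and amortization assumptions on the internal side, the two columns can't be compared at all.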
It might be perfectly fine not to know any of those answers, because your IT organization is attempting to do something new (or different) by deploying a service outside your Data Center, and it needs the freedom to succeed or fail based purely on speed - the time from great idea to great implementation. This is one of the great benefits of Cloud Computing - the cost to fail should be significantly less.
Bringing this back to the opening idea about new ways to measure your Cloud Computing strategy, it's important to remember that the true value of Cloud Computing will always be about speed. As a byproduct it may also deliver cost reductions at various stages of the project, but that requires you to understand how to measure your current Data Center against the options available outside it. Speed gives a company flexibility. That flexibility might be used to reduce risk, or it might be focused on creating new competitive advantages for the business. And depending on the skill set of your IT organization, those advantages might be delivered internally or augmented with external services. Either way, it helps to see the complete picture and be prepared for whatever decisions you make about new IT service delivery.