Sunday, November 13, 2011

VM Mobility and Electric Cars - New Roads for New Cars?

For the past couple of years, as hybrid and electric car technologies have evolved, I've been evaluating when the right time would be for me to make the switch. For 90% of my driving, I travel within 40-50 miles in a day. I could easily get to where I needed to go without any concerns about recharging options (or challenges). But occasionally I need to go a few hundred miles in a day - family vacations, travel to a business meeting, etc. - so I do need to think about refueling challenges. Sometimes costs come into play, but in most cases it works out to be at least break-even for the new vehicles over a 5-7 year period.
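For the curious, here's the kind of back-of-the-envelope math I mean - a minimal Python sketch where every price and efficiency figure is an assumption on my part, not a quote:

    # EV vs. gas break-even; every figure here is an assumption
    miles_per_year   = 12000   # roughly 40-50 miles a day, most days
    gas_mpg          = 30      # assumed conventional-car efficiency
    gas_price        = 3.50    # $/gallon, assumed
    ev_miles_per_kwh = 3.5     # assumed EV efficiency
    power_price      = 0.12    # $/kWh, assumed
    price_premium    = 6000    # assumed extra up-front cost of the EV

    gas_cost_per_year = miles_per_year / gas_mpg * gas_price             # ~$1,400
    ev_cost_per_year  = miles_per_year / ev_miles_per_kwh * power_price  # ~$411
    savings = gas_cost_per_year - ev_cost_per_year
    print("Break-even in %.1f years" % (price_premium / savings))        # ~6.1 years

Plug in your own numbers and the break-even point moves, but with these assumptions it lands right in that 5-7 year window.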

But one thing I never have to think about with the new vehicles is that they will require new roads. New fueling stations, or access to a power plug - yes. But never the wholesale replacement, or even overlay, of the millions of miles of state and national highways. Can you imagine if that's what Toyota, Ford, GM and others proposed in order to be able to leverage the value of these new mobility technologies?

But isn't that a decent analogy for what's happening in the networking and virtualization industries these days? If you truly have a need for VM mobility across longer distances, you're most likely going to need a significant change to the underlying infrastructure to help you realize the value of that VM mobility.

Are there technology options out there to begin to make this happen? Yes.
Will they create the potential for massive change of that infrastructure? Yes.
Is that really what was intended to bring out the value of virtualization technology? Hmmm.....
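To put a number on "longer distance": live VM migration is very sensitive to latency, and a rough sketch (assuming the often-cited 5 ms round-trip budget for live migration - treat that as an assumption, not any vendor's spec) shows why distance alone forces those infrastructure changes:

    # Rough ceiling on live-migration distance from a latency budget
    rtt_budget_ms   = 5.0     # assumed round-trip latency budget
    fiber_km_per_ms = 200.0   # light in fiber covers ~200 km per millisecond

    max_distance_km = (rtt_budget_ms / 2) * fiber_km_per_ms
    print("Theoretical max distance: ~%d km" % max_distance_km)   # ~500 km

    # Real paths aren't straight fiber, and every switch/router adds
    # delay, so the practical radius is much smaller - hence all the
    # talk of new "roads" (stretched L2, overlays, etc.) to go further.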

NOTE: For more technical-level details, the guys from the Packet Pushers podcast do an excellent job of discussing the bigger picture of what this means and the challenges (Shows 71, 68 and 66).

The #OccupyTech Movement of 2011

If you've been following the tech headlines so far this year, you may frequently find yourself thinking that it's almost impossible to keep up with all the changes happening. Sure, it's normal for the tech industry to go through perpetual change, but the pace of the last 12-18 months seems to have shifted into Millennium Falcon-level hyperdrive.

OpenStack, OpenFlow, OpenCompute, Software Defined Networks (SDN), Platform as a Service (PaaS), Hybrid Cloud, Big Data, Hadoop, NoSQL, and Network Fabric architectures in the Data Center. It's quite a list and it doesn't include new hardware economics like micro-servers, Flash/SSD storage or merchant silicon for network devices.

And not to be forgotten, there are the tablet wars, app-store wars and the smartphone revolution for the humans in this equation.

Now here's where all this change gets interesting, where you have to ask yourself this question - "How much does this affect me or my business?" Sometimes it seems like the top 99% of tech headlines are being driven by the 1% - the vendors, start-ups or media that need to keep the discussions front and center. Welcome to the #OccupyTech movement :)

"Uptime" - The New Metric for IT Competitiveness

Having spent the last couple of years dealing with multiple aspects of Cloud Computing - definitions, vendor implementations, business adoption vs. pushback, available public services, etc. - I'm now starting to conclude that it all really comes down to one metric: "Uptime".

Before anyone thinks I'm talking about the 99.xxxx% metric that has been getting thrown around for decades, think again. That "uptime", the measure of how often the system is available (or unavailable), is no longer relevant. That metric is now measured like this:
  • Start with some assumed downtime that your business can afford vs. an assumed loss of revenue (let's say 99.99%).
  • Bring in some architects to figure out how much hardware and software is required to meet that goal.
  • Build the system. Run the system. Periodically update/maintain the system.
  • Annually, measure if you achieved that 99.xxx% level of "uptime".
Now, throw all that away the first time something unexpected happens to your system and it's publicly off-line for 4-8 hours and becomes a topic of conversation on Twitter. Or one of your disgruntled employees mentions on a blog how they couldn't get their job done because your system was down.
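To put that in perspective, here's a quick sketch of what those "nines" actually buy you in allowed downtime per year:

    # What a "nines" target actually allows in downtime per year
    minutes_per_year = 365 * 24 * 60   # 525,600

    for availability in (99.9, 99.99, 99.999):
        allowed = minutes_per_year * (1 - availability / 100.0)
        print("%.3f%% uptime -> %.1f minutes/year of downtime" % (availability, allowed))

    # 99.99% allows ~52.6 minutes a year, so one 4-8 hour outage
    # blows the annual budget several times over.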

"uptime" (lower-case "u") is now expected to be 100%. Engineering might scope it for something else, but non-engineers done know/care about your scoping. They just know they can't work. They don't care if there was a vendor bug, or traffic demand surged, or the ISP had a routing issue. Blah, blah, technobabble...

So what is this new "Uptime" (upper-case "U") I mentioned earlier? This is the "How long does it take to have the new functionality 'Up'?" measurement. Translated a different way:

Sunday, November 6, 2011

Cloud Computing Predictions for 2012

This past week I was asked by Jeremy Geelan (http://jeremygeelan.sys-con.com/) to make some 2012 Cloud Computing predictions ahead of the Cloud Expo conference happening this week in Silicon Valley. These were kept to a short list because they were being consolidated along with numerous other industry viewpoints. The complete list can be found here.


I plan to write a more complete list of predictions later in the year, but here's a preview for now (2011 predictions here):


1. PaaS becomes the overhyped buzzword for 2012, still needing another year to shake out competition in the market before companies begin making decisions on which platforms to adopt. Whoever can combine Java and .NET into an integrated PaaS platform, with options for modern web languages, will take a significant lead with developers.