Saturday, December 10, 2011

Looking forward to 2012

[UBER-DISCLOSURE: These thoughts or predictions offer no insight or insider-knowledge from my current or future employer. This discussion is strictly my own thoughts, based on publicly available information.]

A few weeks back I was asked to provide some Cloud Computing predictions for 2012. Those were pulled together quickly and didn't really provide any context or look at the trends surrounding them. It's getting to be that time of year again: time to look forward at the areas that could be very interesting in 2012. I made some predictions last year, but this time I thought it would be more useful to focus on the trends I expect to matter most in 2012 (at least from my point-of-view).

Topic #1: Cloud Economics - People like Simon Wardley (@swardley), Bernard Golden (@bernardgolden) and Joe Weinman (@joeweinman) have been doing an excellent job of providing macro-level economic overviews of Cloud Computing. They have primarily looked at the trends being driven by "outsourcing to the cloud", the costs of bandwidth and networking, commodity hardware vs. vendor hardware, open-source software vs. vendor software, and the reduction in people costs due to automation within public clouds. The next stage is to move this from conference talks and academic-level papers to tools that managers can use to determine when it makes sense to leverage Cloud for tactical vs. strategic opportunities. Beyond that, the focus shifts from cost-savings to how these new models can impact top-line business growth and industry shifts beyond the technology sector.
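As a hint of what such a manager-level tool might look like, here's a minimal break-even sketch. The function name and all the dollar figures are hypothetical placeholders, not drawn from any real vendor pricing:

```python
def breakeven_months(capex, onprem_monthly_opex, cloud_monthly_cost):
    """Months until buying hardware beats renting cloud capacity.

    Returns None if the cloud option never costs more per month,
    i.e. there is no break-even point and renting stays cheaper.
    """
    monthly_premium = cloud_monthly_cost - onprem_monthly_opex
    if monthly_premium <= 0:
        return None
    return capex / monthly_premium

# A tactical project that only needs capacity for a year may never reach
# break-even; a strategic, long-lived workload probably will.
print(breakeven_months(120_000, 2_000, 6_000))  # -> 30.0
```

A real tool would fold in depreciation schedules, people costs and opportunity cost, but even this toy version frames the tactical-vs-strategic question in terms a manager can act on.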

Topic #2: Democratizing Big Data - So far, most of the talk around "Big Data" has been in the areas of start-up funding, OEM partnerships, "Data Scientists" and infrastructure redesign. Those all have an impact on how the Big Data systems will be built, but I don't believe they focus on the area that will truly make this a difference maker for business. To harness the real value of Big Data, the tools to question, interpret and analyze the information need to be made available to a much broader group than just "business analysts" and Data Scientists. The winning companies around Big Data will be the ones that allow the information to be dissected by managers and individual contributors as simply as doing a Google search or a Wikipedia lookup. Allow the curiosity of the person closest to a business's customers or communities to ask the question that may unlock the nuance needed to truly excel in a new market or with a new product.

Sunday, November 13, 2011

VM Mobility and Electric Cars - New Roads for New Cars?

For the past couple years, as hybrid and electric car technologies have evolved, I've been evaluating when the right time would be for me to make the switch. For 90% of my driving, I travel within 40-50 miles in a day. I could easily get to where I needed without any concerns about recharging options (or challenges). But occasionally I need to go a few hundred miles in a day - family vacations, travel to a business meeting, etc. - so I need to be concerned about refueling challenges. Sometimes costs come into play, but in most cases it works out to be at least break-even for the new vehicles over a 5-7 year period.

But one thing I never have to think about with the new vehicles is that they will require new roads. New fueling stations, or access to a power plug - yes. But never the wholesale replacement, or even overlay, of the millions of miles of state and national highways. Can you imagine if that's what Toyota, Ford, GM and others had proposed in order to leverage the value of these new mobility technologies?

But isn't that a decent analogy for what's happening in the networking and virtualization industries these days? If you truly have a need for VM mobility across longer distances, you're most likely going to need a significant change to the underlying infrastructure to help you realize the value of that VM mobility.

Are there technology options out there to begin to make this happen? Yes.
Will they create the potential for massive change of that infrastructure? Yes.
Is that really what was intended to bring out the value of virtualization technology? Hmmm.....

NOTE: For more technical-level details, the guys from the Packet Pushers podcast do an excellent job of discussing the bigger picture of what this means and the challenges (Shows 71, 68 and 66).

The #OccupyTech Movement of 2011

If you've been following the tech headlines so far this year, you may frequently find yourself thinking that it's almost impossible to keep up with all the changes happening. Sure, it's normal for the tech industry to go through perpetual change, but the pace of the last 12-18 months seems to have shifted into Millennium Falcon-level hyperdrive.

OpenStack, OpenFlow, OpenCompute, Software Defined Networks (SDN), Platform as a Service (PaaS), Hybrid Cloud, Big Data, Hadoop, NoSQL, and Network Fabric architectures in the Data Center. It's quite a list and it doesn't include new hardware economics like micro-servers, Flash/SSD storage or merchant silicon for network devices.

And not to be forgotten, there are the tablet wars, app-store wars and the smartphone revolution for the humans in this equation.

Now here's where all this change gets interesting, where you have to ask yourself this question - "How much does this affect me or my business?" Sometimes it seems like the top 99% of tech headlines are being driven by the 1% of vendors, start-ups or media that need to keep the discussions front and center. Welcome to the #OccupyTech movement :)

"Uptime" - The New Metric for IT Competitiveness

Having spent the last couple years dealing with multiple aspects of Cloud Computing - definitions, vendor implementations, business adoption vs. pushback, available public services, etc. - I'm now starting to conclude that it all really comes down to one metric. "Uptime".

Before anyone thinks I'm talking about the 99.xxxx% metric that has been getting thrown around for decades, think again. That "uptime", the measure of how often the system is available (or unavailable), is no longer relevant. That metric is now measured like this:
  • Start with some assumed downtime that your business can afford vs. an assumed loss of revenue (let's say 99.99%).
  • Bring in some architects to figure out how much hardware and software is required to meet that goal.
  • Build the system. Run the system. Periodically update/maintain the system.
  • Annually, measure if you achieved that level of "uptime".
Now, throw all that away the first time something unexpected happens to your system, it's publicly off-line for 4-8 hours, and it becomes a topic of conversation on Twitter. Or one of your disgruntled employees mentions on a blog how they couldn't get their job done because your system was down.

"uptime" (lower-case "u") is now expected to be 100%. Engineering might scope it for something else, but non-engineers done know/care about your scoping. They just know they can't work. They don't care if there was a vendor bug, or traffic demand surged, or the ISP had a routing issue. Blah, blah, technobabble...

So what is this new "Uptime" (upper-case "U") I mentioned earlier? This is the "How long does it take to have the new functionality 'Up'?" measurement. Translated a different way:

Sunday, November 6, 2011

Cloud Computing Predictions for 2012

This past week I was asked by Jeremy Geelan to make some 2012 Cloud Computing predictions ahead of the Cloud Expo conference happening this week in Silicon Valley. It was a short list because the predictions were being consolidated along with numerous other industry viewpoints. The complete list can be found here.

I plan to write a more complete list of predictions later in the year, but here's a preview for now (2011 predictions here):

1. PaaS becomes the overhyped buzzword for 2012, still needing another year to shake out competition in the market before companies begin making decisions on which platforms to adopt. Whoever can combine Java and .NET into an integrated PaaS platform, with options for modern web languages, will take a significant lead with developers.

Sunday, October 23, 2011

Cloud Computing Drivers - Where is the Focus?

While I was writing this post about the linkage between vendor offerings and IT offerings, I started noodling on this circular challenge:

Public Cloud Driver - IT People (dissatisfaction with IT services)
Public Cloud Enemy - AWS Outages

Private Cloud Driver - IT Competitiveness (business demands for better services)
Private Cloud Enemy - IT People

Lots of time and effort gets spent trying to debate the merits or failings of various technologies. Are technologists really spending their time on the right things? Maybe this is why Gartner has recently flip-flopped on their Public vs. Private Cloud deployment guidance. But which group within the business will be leading that leap?

Looking in the Mirror: IT Demands vs. Business Demands

Sometimes a few different thought streams randomly cross and you start connecting the dots about problems that you've been viewing from only certain angles in the past. That happened to me last week.
  1. Thought Stream #1 had me making a list of the most common objections I receive from IT organizations as we discuss products, technologies, etc. They aren't unusual, and they've probably been reasonably consistent for the past 5-6 yrs (or longer).
  2. Thought Stream #2 occurred as I watched the tweet stream from Chuck Hollis (@chuckhollis, blog) last week. He was commenting on the discussions taking place at an EMC IT Leadership Summit, focused on changes and trends in IT. Since some tweet streams disappear over time, I copied a few of the interesting ones below:

Saturday, September 3, 2011

Thoughts from VMworld 2011

Prior to VMworld, I jotted down some thoughts on areas that I wanted to explore during the week. As I stated then, it feels like we're at a point where there is going to be significant change in many segments of the IT industry.

It was quite a busy week for me, as it was for most people:
  • Recorded The Cloudcast (.NET) - Live - Eps.18 - vCloud and vCloud Security
  • Part of a Silicon Angle "Cloud Realities" panel with Jay Fry (@jayfry3) and Matthew Lodge (@mathewlodge)
  • Recorded the Daily Blogger Techminute for Cisco
  • Recorded an Intel "Conversations in the Cloud" podcast with (@techallyson) - airing date TBD
  • Presented "Cisco UCS and Cloud Vision" whiteboard in Intel booth.
I didn't get to explore all of the areas in my original list. But I did get to walk the exhibitor floor, attend the keynotes and have many hallway conversations with experts in most of those areas. So here's what I gleaned from the week:

Sunday, August 28, 2011

Are there No Rules anymore, or New Rules everywhere?

As I prepared to fly to Las Vegas for VMworld, I started thinking about which technologies I wanted to learn about, which start-ups I wanted to investigate and which strategic angles I needed to dig into more.

As I started thinking about various technology areas - networking, storage, cloud management, application development or usage (PaaS and SaaS) and Big Data - it dawned on me that every one of those areas was under intense pressure to change significantly from where it has been for the last 5-10 years. Technologies are always going through cycles of updates, but I can't remember a time when so many areas were going through potentially radical change at the same time.

Networking: The biggest questions in networking today are focused on the server-access layer of the Data Center.
  1. Do new applications (web, big data, etc.) mandate a reduction of network layers and/or a simplicity of deployment/operations? Do we need new ways to partition networks?
  2. Where do custom ASICs belong in the Data Center vs. "merchant silicon" from Broadcom or Fulcrum Microsystems?
  3. Where do L4-7 services (Load-Balancing, Firewall, IDS/IPS, DLP) belong in these new architectures, and how should they be deployed (application-level, virtual appliances, physical appliances or integrated services in switches)?
  4. Are networks ready to be more automated? How broad or complex a "container" should be automated?

Unlocking the Microsoft Deathgrip on the Desktop?

At VMworld, I had lunch with an Enterprise CIO who told me that they had just written a high 8-figure check to Microsoft for a new ELA. While part of this included Server-side licensing for business applications and databases, the more interesting piece to me was the effects on desktop usage. The statement that really started the interesting part of our conversation was, "We have quite a few Mac users these days, but unfortunately they cost me just as much (in licensing) as any PC user I have."

Huh? How could that be?

Explanation: Any Mac user that uses an Outlook / Exchange account, accesses an IIS webserver, accesses an application with a SQL database, authenticates via Microsoft AD or uses Microsoft Office (PC or Mac version) consumes the same level of ELA licenses as a full-blown Windows PC user.

We then talked about the variety of SaaS or Microsoft-alternative products available in the marketplace. While some of them were being used, the bulk of them were not because of all the user-level retraining or interoperability issues. Getting out from those expensive handcuffs was going to be extremely difficult for them.

Sunday, August 14, 2011

There isn't a "Cloud Layer" ... Oh wait, maybe there is?

We're all familiar with the 9-layer OSI model - 7 layers of technology + Politics and Religion. This is the stack that makes up the Internet. Cloud Computing uses the same underlying technologies, but is often discussed in the context of the IaaS, PaaS and SaaS stacks.

Nowhere in the OSI stack do you see a "cloud layer". As my colleague and noted cloud computing expert James Urquhart likes to say (paraphrased), "There is no cloud layer (in a technology sense), it's all about new operational models."

For a while I accepted that statement without giving it much thought or rebuttal. But some activities over the last couple months have gotten me thinking that maybe this isn't actually true. Let me try and explain.

One could argue that the OSI model (the technical stuff, Layers 1-7) defines all the layers that make up the Internet. But as we all know, the Internet evolved in ways that DARPA's founding fathers never intended or envisioned back in the 1960s. Things like NAT, VPNs, L3-over-L2 technologies, IPv4-to-IPv6 tunnels, or other overlays like MPLS. Those technologies use the layers of the OSI model, but in reality they add new "layers" to the Internet stack to deal with either legacy designs or new usage models. Sometimes they solved problems, and other times they created new problems and added yet more layers (e.g. NAT traversal technologies for multimedia).

All of those "additional layers" of the Internet were focused on new ways to route packets, address or hide networks, or manage legacy network transitions. Very network-centric issues.

In cloud computing, the more central issues are focused on APIs, mobility of workloads (or VMs), obfuscating layers of complexity, making workloads "dynamic" in scale or availability, and transitioning from legacy architectures to new architectures. Some of these involve new operational models (DevOps, "built to fail", etc.) as James Urquhart pointed out. But within that context, we're already starting to see some new layers emerge to be able to actually make the technology more useable or provide greater levels of flexibility. Let's take a look at a few examples:

Saturday, July 30, 2011

The "21st Century Bits Factory"

Growing up in Detroit, I was always fascinated by the auto factories. Beyond the fact that I had tons of friends and relatives that worked there, I used to love to go on the tours and watch the massive machinery turn out these incredible pieces of automotive art. But over time, the atmosphere in the factories changed as competition from European and Asian carmakers increased. Things became more automated, resources were brought in "just in time", parts were sourced from all over the world and in some cases actually came from competitors. Terms like "lean manufacturing" and "six sigma" began to flood into the vocabulary of anyone involved in manufacturing in the 70s, 80s or 90s.

For people working in IT, a similar transition is rapidly taking place. Some people may view the burst of new Data Center building announcements (here, here, here, here, here, and many more) as just a new phase in IT evolution. But I actually believe it's something bigger than simply the natural trends of Moore's Law.

I've been using the phrase "21st Century Bits Factory" for about six months now because I believe this new trend toward hyper-efficient Data Center facilities and operations is a similar tipping point to what we saw in the manufacturing industry decades ago. But instead of making cars or widgets, these giant factories are creating products, commerce and business value through 1s and 0s. The businesses they support are almost entirely driven by the value of this data, so the businesses are beginning to invest in their data centers with laser focus.

On a side note, I had the opportunity to visit a number of factories in China a couple of years ago. There were two elements that seemed to be very consistent across many of the factories:

  • The factory had been given the goal of trying to reduce its physical footprint by 30-50% every 12-18 months. Not only did this mean they needed to reduce the number of steps to create the unit, but potentially find new ways to do it in a smaller space. This might mean new tools, different ways to store inventory, etc. The freed-up space would be used to take on new production lines for the business. [Imagine if IT organizations had that mandate. What new doors could it open for the business?]

  • The factory managers were constantly pointing out unique "best practices" that the factory workers had created themselves, in response to motivating factors (usually $$ bonuses) to improve the productivity of the factory. Sometimes these resulted in better products and sometimes they reduced costs or time to completion. [Imagine if IT organizations were measured by those metrics, instead of primarily on uptime? Could their experience drive better products? Could their experience enable new business opportunities or business models?]

So what does all of this mean to the IT industry?

Sunday, July 17, 2011

"The Cloud Concierge" - The New CIO - Creating IT as a Platform

[Acknowledgement: The name "Cloud Concierge" came from Christian Reilly (@reillyusa), Chief Cloud Architect at Bechtel.]

I submitted the following session abstract to a few Cloud Computing events in the fall. Since it's uncertain whether a session will even get accepted by a particular show, I thought it would be a useful exercise to just go ahead and elaborate on the concept via the blog, as I believe the concept is an important next step for any CIO looking to align their business needs with the portfolio of IT options available to them today and in the future.

Abstract: Faced with many cloud computing options (public, private, hybrid) and the threat of “shadow IT”, tomorrow’s CIO will need to evolve corporate IT into an operational model that is less about saying “no” and more about enabling the cross-cloud capabilities required for 21st century business. This session will discuss the technology and operational transitions needed by CIOs in every industry to accommodate the trade-offs between device proliferation at the edges and operational efficiencies for their 21st century bit factories. Attendees will have a blueprint to evolve IT from an organization into a platform for delivering better IT.

Before I get into some of the specifics, I believe that it's valuable to identify some high-level concepts and trends that are driving technology today:
  • Public Cloud Computing
  • Mobile Computing - Computing “in your pocket”
  • Consumerization of Devices (prices, app-stores, usability)
  • Connected Applications (Web 2.0)
  • Big Data Analytics and Analysis
  • Move Applications to the Network (NAPs for Connectivity)
  • Move Data to the Computer (Big Data)
[NOTE: So people don't need to read the details on those concepts prior to the explanation of the abstract, I moved them to the end.]

Sunday, June 5, 2011

PaaS - The Ultimate Survival Contest for IT

Even with all the confusion around Cloud Computing, there are a few things that are fairly well accepted by everyone involved in the IT industry:
  • Developers have led the early waves of Cloud Computing and they will continue to lead the next phases, especially as more open-source options are made available to them.
  • Corporate IT organizations are typically slow to change, as they are focused on stability and 3-7yr budget/depreciation cycles.
These two dynamics are creating a challenging environment within the IT industry. As more "in house" developers leverage public cloud resources, typically due to the pace of IT (operational) responsiveness, this is creating a "shadow IT" environment outside the corporate IT walls. On the flip-side, corporate IT is growing more concerned about external Cloud Computing resources [1][2][3] because of well-publicized outages and concerns about security and compliance.

The latest wrinkle in all this development is the emergence of several PaaS projects and platforms which will give developers the option of running their applications in public or private environments. They also come with the promise of application portability between public and private environments. Whether or not this happens is still TBD and will probably take several years to work out the kinks (and levels of trust from developers). But it introduces an interesting crossroads for corporate IT organizations, which has experts predicting a variety of potential outcomes.

Wednesday, May 25, 2011

Who knew TLC was the best source of Cloud Computing knowledge?

I'm not proud of this, but I'm the father of a child who has become a reality-show junkie. To be more specific, my child is a TLC reality-show junkie. Luckily it's not all the shows, some of which are quite disturbing, but enough to where a few of them have become part of our family's real-life activities.

What do I mean? For her birthday, my daughter told us she wanted to "go to Hoboken, NJ - to meet Buddy the 'Cake Boss'" (see picture at right). Since my wife and I want to help our kids experience the world, we're making a trip later this year to NYC, with a side trip over to Hoboken to experience Carlo's Bakery and their fine pastries and desserts. Hopefully the lines aren't too long.

So what in the world does this have to do with Cloud Computing, you may ask? Fair question. While setting the DVR for Cake Boss, I've been known to peruse the programming schedule for other TLC shows. Let's take a look at the TLC lineup and see how aligned they are with the Cloud Computing industry these days.

Thursday, May 5, 2011

Confused about Cloud Computing? You should be...

You may or may not believe this, but I think we've reached a point with Cloud Computing where the discussions centered around "definitions" are almost over and we'll quickly be moving into a stage of people/companies wanting to do stuff. I know what you're thinking... thank goodness we can finally stop having every presentation begin with a NIST definition, or a stack showing *aaS this and *aaS that.

That's the good news.

The bad news? The number of available Cloud Computing options in the market today is mind-blowing. Let's take a basic inventory (in no particular order):

Before you dive into these lists, keep this quote from 'The Wire' in mind:
Cutty: "The game done changed."
Slim Charles: "Game's the same, just got more fierce."

Thursday, April 28, 2011

A view into the "Clouds of Change" blog

I created this word cloud using the service. I thought it would be interesting to review these periodically to see what topics are dominating the discussion on the blog. I'll post these from time to time to see how things change in this rapidly evolving segment of the technology industry.

Saturday, April 23, 2011

An Interesting Set of Coincidences

I'm not a big believer in conspiracy theories, but it is sort of fun and ironic when certain coincidences all line up around the same time. For example:
  • My EC2 Instance - The First 1000 Days (April 11, 2011)
  • Cloud Foundry makes "Clouds Portable" (April 12, 2011)
  • Some days you're the Pigeon and some days the Statue (April 14, 2011)
  • The "Cloudpocolyse" Begins (April 21, 2011)
And of course there's this fictional prediction...

Not sure there are any great lessons here, except to work hard, be humble and keep in mind that fundamentals (technology, design, etc.) matter.

Wednesday, April 13, 2011

101 Thoughts about the "Cloud Foundry" Announcement

[This is still a work in progress - the blog post, not Cloud Foundry ]

This week's Cloud Foundry announcement has the Cloud Computing community buzzing about the OpenPaaS framework driven by VMware. Considering the size of VMware, the competitors it's taking on (Google, Microsoft, IBM, Oracle, Amazon AWS and Red Hat), and the delivery model (open-source), it's no surprise that opinions and analysis are coming from every direction.

Who knows if I'll make it all the way to 101, but my head's been spinning with all the angles, so here goes.... [NOTE: Some of these thoughts are specifically about Cloud Foundry, while others are about the idea of application portability & mobility, since frameworks tend to come and go over time.]

1. Their hypervisor is always under pricing pressure from free alternatives (Hyper-V, Xen, KVM), their middleware is now open-source and SPs probably aren't sure if they are vendor or competitor. It will be interesting to see what their revenue model looks like long-term.
2. Cloud Foundry creates a PaaS model that isn't dependent on VMware VMs, but does attack their largest competitors, so it's an interesting "creative disruption" strategy. This always takes courage from leadership. Paul Maritz is uber-smart and has been in the big-boy games before, so you have to expect he has a grand plan for all of this.
3. It's not clear if the "for fee" Cloud Foundry from VMware is attached to Mozy's service. Mozy just raised prices. If they are connected, it will be interesting to see if the "for fee" service can remain competitive at scale... or if that's really part of their strategy?
4. What does Cloud Foundry mean for existing VM Admins?
5. How will the 1000s of VMware channel partners adapt to this new model? Do they have the skills to understand this and sell this? Moving from infrastructure to applications is not a simple context switch for most people.
6. How will Cloud Providers (SPs hosting the apps) react to the portability capabilities? Will we see minimal friction (e.g. number portability), or subtle changes to their services (ToS, billing, network/security infrastructure) that reduce that flexibility?
7. Several people are already connecting the open-source dots of Cloud Foundry and OpenStack, so what does this ultimately do to vCloud Director (at the SP, or between ENT & SP)?
8. Does VMware have the deep pockets to support the developer community enough to make it dominant (like Google and Microsoft did before them)? Is VM revenue enough of a cash cow to sustain it?

Sunday, April 10, 2011

Has Financial Services shown us the Future of Cloud Computing? - Part II

In Part I of this series, I looked at some of the thinking and buyer-behavior similarities between today's financial services and today's Cloud Computing trends. In Part II, I wanted to look at the potential similarities in the value-chain between these two markets.

In the past, financial services was dominated by investment banks, insurance companies and large financial conglomerates. They had access to the largest amounts of capital and the best knowledge of market trends. Investors could build portfolios that created steady, diversified returns that would allow them to plan for the future and avoid major catastrophes. Access to those portfolios was available through a variety of sources, often local or regional professionals, that provided a variety of services.

Over time, due to technology and competitive forces, the financial services giants consolidated and began offering more services directly to clients. They brought together retail banking, brokerage, investment banking and M&A. Differentiating between the largest companies became difficult for many investors. Seeing an opening to offer differentiated or unique services, many "alternative trading" platforms arose, including the rise of hedge funds. These services focused on automated trades, often taking both sides of the market and using extreme levels of leverage to create ROI. By being highly leveraged, oftentimes ignoring (or going against) established guidelines, they began to establish themselves as the area where the most value was derived in the financial services industry. They left the established players to focus on lower-risk customers, often focused on traditional principles and goals.

Saturday, April 9, 2011

Has Financial Services shown us the Future of Cloud Computing? - Part I

[This is the first of a two-part blog looking at the parallels between Cloud Computing and Financial Services - Part I looks at trends in the use and consumption of financial services and assets.]

"The value of your internal assets just hasn't been unlocked because you're not using them the right way."

"The easiest way to deal with risk is to give it away to some other business (and they'll give it away as well). Package it up and claim that it has no flaws."

"Massive leverage on a shared infrastructure produces incredible ROI."

"You can trust us with your assets; we don't have internal systems that can better leverage that information to position ourselves on the other side of this business."

[muffled in the background, amongst all the noise... "Create a plan that utilizes tools you understand and can adapt to ups and downs in the market."]

Cloud Definitions - More Useless than Statistics?

Given that we all use various elements of Cloud Computing every day, I know they can't be classified as Lies, damned lies and statistics...

...but several years into this latest phase of computing, it seems we can't go more than a few weeks without someone trying to create a new definition for Cloud Computing (here, here and here). Everybody is spinning a new cloud definition to help establish themselves as the leader in this transition, according to their own model.

Do I blame companies, consultants, standards-bodies and media for trying to establish themselves as thought leaders as things change? No, that's normal behavior. But what really concerns me is the potential damage this does to overall innovation across the industry if people spend more time worrying about definitions than they do on creativity. We discussed this on The Cloudcast with Christian Reilly a few weeks ago, in the context of how he helped his company adopt a new services-centric model within their business.

Christian made a subtle, but insightful, comment that we didn't put very many definitions around "the Internet" as it was evolving throughout the 1990s. This allowed companies and developers to be creative in how they created technology and evolved it for their businesses.

Am I overthinking this, or are the definers getting in the way of the thinkers and doers?

    Monday, April 4, 2011

    The Challenges of Market Transitions

    [Disclosure: My current employer (Cisco) is a business/technology partner to both VMware and EMC]

    Earlier today VMware announced that they had acquired the assets, people and operations of online storage service Mozy from EMC. Almost immediately, the Twitterverse was calling out VMware for abandoning their current business model, primarily acting as a vendor selling to Enterprise/Commercial businesses and Service Providers. Mozy currently operates as a Cloud Storage provider (to both consumers and small-business), so questions arose about whether this now put VMware in competition with their existing (or future) Service Provider customers.

    Considering how rapidly the IT landscape is changing, I had to step back and think about whether an action like this completely forces VMware to change their business model or if there were alternative ways to think about it. My immediate thoughts are recorded on the whiteboard picture above. [NOTE: I have no inside-information about what the VMware strategy might be, these are purely guesses. Don't bet your 401(k) on any of my scribbles.]

    Saturday, March 26, 2011

    Has your Data Center been measured for Cloud yet?

    A few weeks back I wrote a post asking how you planned to measure your Cloud Computing strategy. In that post I was trying to help people understand that while cost-savings or cost-comparisons (between Cloud services) are somewhat helpful in making an ROI business-case, they may not be the right way to truly measure the value of Cloud Computing, which is really about speed, agility and innovation for the business.

    ...and then an interesting thing happened over the last couple of weeks as I spoke at a couple industry conferences and started asking IT operations people how they measured themselves and their environments. [Hint: I may have put the cart before the horse...]

    Usually I'd start out the discussion by asking people why they were considering (or deploying or using) Cloud Computing. It seems like a logical starting point since they were attending a session focused on Cloud. It's not unusual for someone to speak up and say they occasionally use "public Service X" because it's "really inexpensive" and they wanted to see if it's also applicable to their internal network as well. OK, fair starting point; let's explore that a little bit more. How much does "Service X" really cost, all total?

    Here's where the conversation begins to get interesting. Let's look at the things to consider:
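A back-of-the-envelope comparison along these lines can be sketched in a few lines of code. All of the line items and prices below are purely illustrative assumptions of mine, not figures from the conversations above; the point is simply that an internal comparison has to account for costs (hardware amortization, power/cooling, admin time) that rarely show up in a quick look at a cloud provider's price sheet.

```python
# Hypothetical total-cost comparison: a metered public cloud service
# versus running a similar workload internally. All numbers are
# illustrative assumptions, not data from the post.

def monthly_cloud_cost(instance_rate_per_hr, hours, storage_gb,
                       storage_rate, egress_gb, egress_rate):
    """Sum the visible, metered charges for a hypothetical 'Service X'."""
    return (instance_rate_per_hr * hours
            + storage_gb * storage_rate
            + egress_gb * egress_rate)

def monthly_internal_cost(hw_capex, amortize_months, power_cooling,
                          admin_hours, admin_rate):
    """Fold in the internal costs that a quick comparison often misses."""
    return (hw_capex / amortize_months      # amortized hardware purchase
            + power_cooling                 # monthly power and cooling
            + admin_hours * admin_rate)     # sysadmin time

cloud = monthly_cloud_cost(0.10, 730, 500, 0.10, 100, 0.12)
internal = monthly_internal_cost(6000, 36, 40, 10, 50)
print(f"cloud: ${cloud:.2f}/mo, internal: ${internal:.2f}/mo")
# prints: cloud: $135.00/mo, internal: $706.67/mo
```

The interesting part is less the totals than the parameter lists: "all total" means deciding which of these inputs your organization is actually tracking today.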

    Friday, January 21, 2011

    What might Google 3.0 look like?

    Google announced yesterday that they would be making some changes to their executive team and I covered one angle on that here. Even more interesting than the pace of change over the last 10 years is what Google 3.0 might look like. I wrote my MBA Strategy thesis on a future Google business model, but that was two years ago and it seems woefully out of date now. Looking back on it, it was very focused on moving into new ways to collect information (mobile, social, etc.), enhancing the core advertising business.

    If I look at where Google is now, and why they might have been considering some leadership changes, a few things jump out at me.
    1. Back in 2000, the way that Google changed search on the Internet was incredible. It was as if they opened a completely new (and large and fast) door to the world of information. But over time, we've become accustomed to that information always being available to us. Information is now expected. It's still a cash cow, but it's no longer a "wow" factor. [Bring Sexy Back]
    2. The Internet is exploding into a social medium. And while there are tons of engineers figuring out new algorithms for unlocking information from social interactions, "social" is an unpredictable area. Who knew that we'd love to send 140-character messages, or tell the world that we're checking into the gas station, or buy virtual farmland? Predicting the social internet is like predicting the next fashion craze for 13-14 year-old girls. It doesn't always make sense, and it's not predictable. And Google is all about data and things making sense. They believe they can predict your next search and potentially the future. There is a disconnect here. [OK Fine, You're a Valley Girl]
    3. After the Internet bubble burst, Google became the "Internet IPO market" for many start-ups. But other than YouTube and Android, they generally didn't treat those acquisitions very well. So not only did they develop a reputation as a less desirable end-state for start-ups, but they increased the competitiveness of other funding sources (secondary markets, VC funding, etc.). [Another One Bites the Dust]
    4. As we've seen with examples like Netflix's use of Amazon AWS, the cost to get large-scale new services out the door is significantly less than it was in the past. This means that start-ups don't necessarily need Google's scale to reach a mass audience, so they aren't as willing to sell out to the Googleplex. [Start Me Up]
    So how might they address some of these issues and find new areas of business and revenues? Here are a few thoughts:

    Sunday, January 16, 2011

    Apps Stores coming to your Enterprise - "iTunes IT" - Part II

    Following up on my initial post, it seems that one of the areas that resonated with readers was the idea of "app store" like functionality coming to their businesses, internally. In doing some more research, it appears that this concept is already starting to gain some traction with very large players in the market, such as a Restricted Enterprise App Store (an Apple patent, via FastCompany), VMware's Project Horizon, and Citrix OpenAccess.

    Now by no means is this a new idea, as others have written about it, and companies like Google have been delivering this concept via SaaS offerings for a couple of years. But businesses are often slow to adopt change, so seeing major vendors adopt a model that is closer to Enterprises (and mid-market), giving them a greater level of control over certain elements, is a step in the right direction to effect this type of change. It allows end-users and lines-of-business to leverage new services (internal or external), while continuing to allow a level of control/trust/security for the IT organization.

    This brings up some interesting questions for makers of Enterprise software.
    • Who is your customer now and over the next few years if this trend gains traction? 
    • Do you know how to target the end-users and ISVs that will be the consumers and suppliers of these Enterprise App Stores?
    • Are you learning anything from consumer marketing or social media to better target or influence your future customers?
    • Are you ready for communities of user-feedback to potentially have greater influence over future sales than licensing lock-in?
    • Are you continuing to shift your user-experience to work seamlessly with the new types of devices that expect to gain productivity via Enterprise App Stores?

    Sunday, January 9, 2011

    Do'ers vs. Did'ers

    Following up on my post about Cloud's impact on tech jobs, I had some people ask me what areas might be good for learning new skills or new technologies. Despite my crude attempt at predictions, I don't fancy myself as a prognosticator and try to avoid forecasting the future. Instead, I've started giving a slightly different brand of advice. It goes something like this...

    "Start following the do'ers instead of the did'ers"

    It's a fairly simple concept...

    Saturday, January 8, 2011

    Is iTunes the next generation of the IT Department?

    In business school we spent a good bit of time talking about corporate strategy and looking at where disruptive competition might come from. The simple thing to do is look at existing competitors or a start-up within the industry. But the real challenge is to consider that the next big competition might come from a completely different industry. A classic example is to look at the airline industry and consider how collaboration tools (Skype, WebEx, Cisco Telepresence) have allowed businesses to consider alternative ways to hold meetings and client interactions.

    A recent post by Chuck Hollis got me thinking about competition for IT departments. My response was partially inspired by a tweet from Twitter/Square founder @jack, where he mentioned that he now does about 90% of his daily work activities on his iPad. It got me thinking about how little I expect from my IT department on a daily basis, beyond basic network connectivity. So I started doing some math and came up with a number of ~ $150/month in costs for my expectations of the "perfect IT environment" for me.
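That kind of per-user estimate is easy to reproduce as a simple sum of consumer-grade services. The line items and prices below are my own guesses at what such a "perfect IT environment" might include, not the author's actual breakdown; the exercise is just to show how quickly a handful of off-the-shelf subscriptions lands in the neighborhood of $150/month.

```python
# Hypothetical breakdown of a ~$150/month per-user "IT environment"
# built entirely from consumer/SaaS services. Line items and prices
# are illustrative assumptions, not the post's actual math.

monthly_costs = {
    "broadband + mobile data": 80.0,
    "hosted email/calendar":   10.0,
    "cloud file storage":      10.0,
    "web conferencing":        20.0,
    "online backup":           10.0,
    "SaaS office suite":       15.0,
}

total = sum(monthly_costs.values())
print(f"estimated per-user IT cost: ${total:.2f}/month")
# prints: estimated per-user IT cost: $145.00/month
```

Swap in your own line items; the striking part is how few of them involve a traditional IT department at all.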

    Is your IT Department serving too many meals?

    One of my favorite ways to get insight into sports analysis is via ESPN's "The B.S. Report" podcast. This week's episode (around the 8:00 mark) featured a great analogy by Mike Lombardi, where he talks about how great restaurants only serve a few meals (or types of food), while average ones attempt to serve a little bit of everything.

    It got me thinking about the evolution that we're seeing within Data Centers. Not only are some forward-looking companies trying to consolidate the number of systems they have to manage, but their people are starting to consider whether they need to evolve their skills. It's an interesting dilemma in that a reduction in the number or complexity of the systems should reduce costs. But will the expanded skill sets of people drive greater efficiency (fewer silos to make decisions or coordinate actions) or more mediocre implementations? Will the system consolidation happen faster than the skill-set expansion, or cross-pollination?