A continuing controversy in cloud computing is its putative cost benefits; specifically, whether public cloud computing can provide cost advantages over computing carried out within a company’s own data centre.
Certainly, billions of bytes have been expended on this discussion, with steadfast advocates for both positions. I have often heard people confidently state that their internal cloud environment is cheaper than the public alternatives. I have even seen spreadsheets proving that an internal cloud has lower costs than the primary public provider, Amazon.
Briefly summed up, the argument in favor of private clouds goes something like this: if one takes Amazon’s posted prices and maps a given application’s resource demands onto them, it seems clear that an IT organization could achieve lower costs running its own infrastructure.
The argument in favor of public cloud computing can be summed up in this manner: cloud providers have greater economies of scale than IT organizations, run extremely efficiently due to automation, and achieve the kinds of cost advantages that adhere to specialists in every field.
Three items crossed my desk over the past week that seem relevant to all participants in the controversy. The first was an interesting set of Twitter exchanges, the second a blog post by Gartner analyst Lydia Leong, and the third a blog post by Forrester analyst James Staten.
In the Twitter exchange, one person posited the cost advantages of public cloud computing. The person taking the contrary position asked why, if Amazon has such efficiency advantages, its pricing hasn’t followed Moore’s Law. Moore’s Law, if you’ll remember, is commonly taken to mean that the cost of computing drops by 50 percent every 18 months. So, if Amazon is so efficient, why aren’t its prices plummeting?
In Leong’s blog post, she describes her interactions with clients regarding what actions they should take with respect to cloud computing. She notes that most of her clients aren’t interested in “best” practices; they’re just looking for “good” practices. Some of these interactions involve “helping a deeply reluctant client think of some token project that he can put on cloud infrastructure so he can tell his CEO/CFO/CIO that he’s done it.”
Staten’s blog focuses on scoring his 2011 cloud predictions, but I will home in on one end-user quote contained in his evaluation of one of those predictions.
Taken as a whole, these vignettes sum up why IT organizations are so challenged by cloud computing, and why economics is likely to be the key battleground on which the struggle between public and private cloud will be decided.
Why Amazon’s prices aren’t dropping
The Twitter exchange demonstrates a fundamental misunderstanding of the difference between pricing and costs, and of why making decisions based on current pricing may prove dangerous in the future.
It is true that Moore’s Law typically means that the cost of computing drops rapidly over time. Extremely rapidly. This phenomenon is so powerful that it has transformed our economy for 40 years.
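To make that arithmetic concrete, here is a minimal sketch of what a unit price would do if the full Moore’s Law benefit (a 50 percent cost drop every 18 months) were passed straight through to customers. The $0.10-per-hour starting price is purely illustrative, not a real provider rate:

```python
# Illustrative only: the $0.10/hour starting price is a made-up figure.
# Shows a price halving every 18 months, per the Moore's Law framing above.

def moores_law_price(initial_price: float, months: int) -> float:
    """Price after `months` months, halving every 18 months."""
    return initial_price * 0.5 ** (months / 18)

for months in (0, 18, 36, 54):
    print(f"Month {months:2d}: ${moores_law_price(0.10, months):.4f} per instance-hour")
```

After four and a half years the price is down to an eighth of where it started, which is why flat public cloud pricing looks so puzzling at first glance.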
It is a mistake, however, to believe that Amazon’s costs (and, by extension, those of public cloud providers in general) are somehow incapable of realizing those benefits. It is also a mistake to believe that public cloud providers’ prices will remain at current levels for the foreseeable future, or that the right benchmark for internal cloud prices is current public cloud pricing.
There can be little doubt that public cloud providers are achieving more computing bang for the buck every day. It doesn’t mean that their appropriate response is to drop prices, however. Why is that?
I’m reminded of an anecdote I read a decade or so ago. IDG (owner of CIO magazine and CIO.com) had started a magazine in China. Its business was going fantastically. So fantastically, in fact, that it had a problem: too much demand for ad space. It literally couldn’t produce a large enough issue to print all of the ads that advertisers wanted to place. The magazine publisher described the issue to Pat McGovern, CEO of IDG, who responded: “That problem is easy to solve. Just double your prices and let the advertisers decide which ads ought to get printed.”
Why is this anecdote relevant to the Moore’s Law pricing discussion? Just this: Amazon is currently in the same position as that Chinese magazine was a decade ago. Amazon is notoriously close-mouthed about how its cloud business is going, but recently it published some figures about how many objects its S3 storage service contains. Amazon has more than doubled the number of objects it stores over the past nine months. Now it stores over half a trillion objects in total, with the growth clearly accelerating.
As another data point, at last April’s CloudConnect conference, Amazon CTO Werner Vogels said that AWS now installs as much computing capacity every day as Amazon needed to run its entire business in 2001.
In such an environment, it seems likely that Amazon is challenged just to keep up with demand. Evidence of that challenge surfaced last year, when AWS users frequently found that one of the eastern Availability Zones had run out of capacity.
The obvious inference is that Amazon is not reducing its prices at a rate consistent with Moore’s Law because doing so would exacerbate its main challenge: keeping up with demand.
A second inference is that Amazon could reduce its prices, since Moore’s Law is lowering its costs. The fact that it has not done so thus far says nothing about what it might do should the enormously high growth rates it currently enjoys moderate.
A third inference—and one absolutely critical to end users drawing up cloud plans—is that one should not assume a future cost advantage in one’s private cloud based on currently available public cloud provider pricing. The key question is whether any current private cloud cost advantage is sustainable over the long run.
My own view, stated several weeks ago, is that it’s hard for me to understand how a private cloud can achieve a cost advantage compared to a public provider, since it is hard to identify any cost input in which a private cloud has an advantage.
IT organizations miss the cloud computing boat
The analyst blog posts are, frankly, dismaying. It’s clear that cloud computing represents a fundamental shift in the way organizations perform computing services, yet both analysts (in different ways) illustrate how IT organizations are failing to confront this shift.
Leong’s quote (noted above) carries the unmistakable mark of an organization that is not fully grappling with a fundamental issue, but instead trying to paper over organizational reluctance with window dressing. Put another way, these organizations are constructing a cloud Potemkin village in hopes of convincing higher-ups that they are really doing something about cloud computing. One cannot know what percentage of Leong’s clients are having these kinds of conversations with her (she writes that they’re not infrequent), but it seems likely to be somewhere in the range of 20 to 35 percent. That means that as much as a third of the organizations she speaks with are dissembling about their real cloud efforts.
Staten’s post, by contrast, contains a quote that describes the opposite of Leong’s situation. The context is Staten’s discussion of the third of his 2011 predictions: that hosted private clouds would outnumber internal clouds three to one. In that discussion, Staten describes the common motivation his clients demonstrate for attempting to build their own cloud before trying an external provider, quoting an I&O leader from a large manufacturing client: “If we can’t get this built in our own environment, we’ll explore that option, but I want to give my team a chance first.”
Really, this is astonishing. Faced with the most significant development in enterprise IT of the past decade, one large manufacturer is leaving the outcome up to chance and, one guesses, the hope that an internal group is up to the task. Unsaid is how long the internal group’s effort will be pursued, and what criteria will be used to judge its success.
In my experience, such efforts are never found unsuccessful; the criteria are merely adjusted to ensure that the initiative, no matter how poorly it actually accomplishes its task, is judged sufficiently successful so that no one ends up looking bad. Unfortunately, while poorly executed initiatives limp along, seeking criteria by which they can appear successful, other organizations pursue initiatives with higher probability of success—with the end result that they will outperform the first organization in the marketplace.
To my mind, this “let’s give the home team a shot” approach verges on irresponsibility. Senior IT executives are entrusted with the responsibility of making crucial IT decisions on behalf of their companies. Making a decision on the basis of wanting everyone in the organization to feel good does not meet that responsibility.
At a minimum, if one is not completely confident in the organization’s internal capability to implement a real cloud computing environment (and not just the barely plausible “token project” Leong alludes to), it is critical to pursue an external cloud initiative in parallel with the internal one. Failing to do so risks finding oneself two or more years behind peer competitors.
Public Cloud or Private Cloud: Which Is Right For You?
So what should you do to address this environment, where the future pricing of public cloud providers is not certain, and the reliability of internal cloud efforts is lower than required? Here are some suggestions:
Avoid the tyranny of the spreadsheet. If some bright spark demonstrates in Excel that putting an application up on Amazon is more expensive than running it internally, don’t stop there. Evaluate whether the decision would be different if the external pricing were 30 percent lower. Or 50 percent lower. If your costs drop every year, don’t you think your external competitor’s do too? Economics will be part of every deployment decision in the future, and you should include different external cost assumptions in your evaluation criteria (a simple sensitivity check of this kind is sketched after this list).
Don’t set a low hurdle because of familiarity. Even if one of your subordinates is telling you that an internal cloud is the right way to go, as a senior executive your job is to assess the likelihood that your initiatives will succeed and to make decisions based on that assessment. If you’re not confident your organization can pull off a complex infrastructure project like an internal cloud, don’t go forward just to make your staff feel good. Set criteria for success. Put parallel initiatives in place, one internal, one external. Don’t worry about competition; competition serves customers well, and in this case the customer is you. Your charter is to achieve the best possible results, and if some hurt feelings ensue, that’s the price of your position.
Put real governance in place. The real decision criterion for placing an application is not going to be cost; it is going to be privacy, security and compliance. To achieve everything you need with cloud computing, you can’t have deployment decisions held up for manual review by your compliance expert. You need automated systems that apply policy to achieve compliance, security and privacy; in other words, you need governance. Implement it near the start of your cloud initiative, because the assumption that it isn’t really needed, since all applications will run in the internal cloud, is going to break down early (a minimal sketch of such an automated policy check follows this list).
Recognize the pressure isn’t going to let up. Just because you got a token project up this year doesn’t mean “the cloud thing” is over. We recently worked with one large enterprise whose CTO announced his goal was to be 100 percent cloud-based in 18 months. That may be unrealistic, but don’t imagine for a minute that it won’t be on his agenda every month from now on.
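To illustrate the first suggestion, here is a minimal sensitivity sketch. Every dollar figure is a hypothetical placeholder, not real internal or Amazon pricing; the point is only that the spreadsheet’s conclusion can flip once you model future external price cuts:

```python
# Hypothetical figures: suppose the spreadsheet says an application costs
# $10,000/month internally and $12,000/month on a public cloud today.
# Does the "internal is cheaper" conclusion survive future price cuts?

internal_monthly = 10_000.0  # assumed internal run rate (illustrative)
external_monthly = 12_000.0  # assumed public cloud quote today (illustrative)

for cut in (0.0, 0.30, 0.50):
    external = external_monthly * (1 - cut)
    winner = "internal" if internal_monthly < external else "external"
    print(f"External pricing {cut:.0%} lower: ${external:,.0f}/mo -> {winner} wins")
```

At today’s prices the internal cloud wins; at a 30 percent discount the decision reverses. That is the whole argument against stopping at the first spreadsheet.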
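And to illustrate the governance suggestion, here is a minimal sketch of an automated placement policy check. Every rule, field and name here is invented for illustration; real policies would encode your actual compliance, security and privacy requirements:

```python
# Invented example policy: applications handling personal or regulated data
# stay internal; everything else may also be placed on a public cloud.
# No manual review step is involved: the policy runs at deployment time.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    handles_personal_data: bool
    regulated: bool  # e.g., in scope for PCI or HIPAA

def allowed_targets(app: App) -> list[str]:
    """Return the deployment targets this application's policy permits."""
    targets = ["internal cloud"]
    if not (app.handles_personal_data or app.regulated):
        targets.append("public cloud")
    return targets

print(allowed_targets(App("marketing site", False, False)))  # both clouds
print(allowed_targets(App("billing", True, True)))           # internal only
```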
In a piece published over two years ago on the turmoil in the newspaper industry, Clay Shirky, using the rise of movable type as an example, noted that while in retrospect the outcomes of revolutions seem obvious and neatly implemented, during the revolution everything seems chaotic: the outcomes are completely unknown (and many times not even among the predicted outcomes); the criteria by which success or failure will be judged are unclear; and the experience of the participants is disconcerting and disorienting.
I hear many people who are heavily involved in cloud computing voice the wish that the discussions about private vs. public, cloud security, SLAs, and so on would just go away; each of them regards the questions as settled. What they fail to recognize is that, for all participants in IT, cloud computing is this generation’s movable type. It’s clear that a revolution is under way, but the impact and outcome are far from established, even while the dislocation and disorientation are completely obvious. In such circumstances, the controversy is bound to continue. Following steps such as those outlined above gives IT organizations practices that will stand them in good stead no matter how the times buffet them.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of “Virtualization for Dummies,” the best-selling book on virtualization to date.