Network World has conducted multiple tests of cloud-based services over the past year, and our overarching conclusion is that shifting compute processes to the cloud can help companies save money and become more flexible.
What’s even more potentially game-changing is the attitude of cloud proponents, who view compute power as just another disposable commodity. In other words, hardware, and to an extent, software and applications, are just a means to an end, rather than hallowed and holy platforms to be venerated through long and arduous life cycles.
The interesting question going forward will be what effect this transformation in how companies acquire and consume compute resources will have on longstanding relationships between traditional IT vendors and their customers.
Our overall conclusion, after conducting a variety of tests, is that the cloud services providers currently do a good job of delivering on their promises. One area they could improve on is security.
What our testing revealed
In our initial tests of private external clouds, we reviewed secure connections to privately hosted external resources. The prices seemed high to us, compared with simply buying supplemental hardware. Yet expanding needs and crimped budgets can make the convenience of extending internal resources externally attractive.
As we reviewed products in this grouping, we wondered about security within the privately connected expansion ‘cloud.’ We negotiated a VPN connection with each of the vendors in the review, as a VPN offers an encrypted communications path between an organization and its external cloud resources.
Each of the vendors was able to help us connect to our admittedly non-Cisco-like virtual router appliance from Vyatta. That covered the data link and, ostensibly, data flowing across a cloud hosting facility’s network backbone.
Another type of data, however, flows across cloud instance storage resources. If the instance boots on a SAN, there aren’t easy ways to ensure that iSCSI, Fibre Channel, or other externally connected storage resources are encrypted and secured.
So, we suggest that public or private cloud storage resources be encrypted, either through operating systems or file subsystems. Disks might be local and ostensibly untouchable, but if they’re part of a SAN connected by whatever means, assurance that data encryption is used is important.
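To illustrate, here is a minimal client-side sketch in Python using the cryptography library; the file names and key handling are hypothetical, and a production deployment would keep the key in a proper key manager rather than generating it inline.

```python
# Minimal client-side encryption sketch (hypothetical file names and key
# handling). Data is encrypted locally, so the provider's disks and SAN
# only ever see ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetch from a key manager
cipher = Fernet(key)

with open("customer_records.db", "rb") as f:   # hypothetical source file
    ciphertext = cipher.encrypt(f.read())

with open("customer_records.db.enc", "wb") as f:
    f.write(ciphertext)                         # only this file is uploaded
```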
Physical security
The next question regarding a cloud facility has to do with physical access to a system. In our initial privately connected cloud test, only Bluelock gave us concrete assurance that physical access to the resources we used would be subject to a rigorous protocol. Employees had to follow a specific procedure to physically touch our equipment. They showed us cameras located in their network operations center. Others likely have this capability, too.
The nervousness surrounding physical access is warranted. Systems can ‘accidentally’ be rebooted after having malware or logging code injected into them. Port mirrors can be added. All sorts of mischief can be done by personnel. We like doing this ourselves, as we have a hacker’s instinct for pushing sticks into hornets’ nests so as to test new running shoes.
We found that a commonality among privately connected cloud providers was a certification called Statement on Auditing Standards No. 70, Type II (a/k/a SAS 70-II), a Sarbanes-Oxley-era, AICPA-sanctioned presentation of an organization’s controls and procedures that is audited and tested annually.
Such a stamp isn’t an approval; rather, you have to read the crux of the controls and procedures, then know a bit about the auditing organization’s reputation, to discern what the SAS 70-II stamp means. Inside the SAS 70-II documents there should be a good list of controls and procedures, along with the imprimatur of a respected auditor. While it sounds complex, the process is understandable, and it’s the only seal of approval one may find for cloud providers. It’s a start.
Roping clouds
Our looks at public cloud automation and management tools were eye-openers. Managing cloud resources falls into two distinct categories: “retailing” cloud resources and life-cycling cloud instances.
We covered “retail” in our recent private cloud review, where managed service provider (MSP)-like applications were compared and contrasted.
Cloud management applications, whether used internally or offered for sale by organizations, package virtual appliances; cost out and charge for appliance or resource deployments; and keep track (through monitoring) of who’s doing what, where, and to what degree of success. Clouds are all about virtualization packaging: deploying instances through a life cycle on a virtualized platform and moving instances and their data around.
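To make the costing role concrete, here is a minimal, hypothetical chargeback sketch in Python; the rates, users, and usage records are invented for illustration and don’t represent any product we tested.

```python
# Hypothetical chargeback sketch: rates and usage records are invented.
# Real cloud management products do this with metering, directory services,
# and billing integration; this shows only the core arithmetic.
HOURLY_RATES = {"small": 0.085, "large": 0.34}  # $/instance-hour (assumed)

usage = [  # (user, instance size, hours run) -- sample records
    ("alice", "small", 120),
    ("bob",   "large", 40),
]

for user, size, hours in usage:
    cost = HOURLY_RATES[size] * hours
    print(f"{user}: {hours} h of '{size}' = ${cost:.2f}")
```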
Developers have available to them a common set of approaches that revolve around bundling, secure deployment, storage and inter-instance data flow, and the monitoring of running cloud instances.
It’s clear that Amazon Web Services (AWS) has gained leadership in the public cloud model, and AWS is the platform that the cloud automation applications we tested “know” out of the box. Amazon’s API set is the first one that cloud automation tools write to.
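For a flavor of what writing to Amazon’s API set involves, here is a minimal instance-launch sketch using boto3, a Python SDK for AWS (a later SDK than the tools we tested used); the AMI ID and key pair name are placeholders, and credentials are assumed to be configured.

```python
# Minimal EC2 launch sketch via boto3 (assumes AWS credentials are
# configured); the AMI ID and key pair name below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",        # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```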
RightScale’s community contributions, virtual appliances, and community wisdom gave weight to its diverse offerings, which ranged from grid computing to one-off job control mechanisms that used the AWS and Rackspace cloud APIs. These same tools can be used against internal cloud resources as well.
Modeling grid applications was the strength of Tap In Systems’ cloud automation tool, though its monitoring capabilities were comparatively weak when we tested it. Much cloud automation modeling takes a loose-and-fast approach, but Tap In Systems builds logic modeling that is initially tedious, yet pays handsomely at production time.
What surprised us about both types of tools we tested (retail and life cycle) was that systems security really wasn’t touched on heavily. We could build stupendously large and highly sophisticated infrastructure and lay down enormously powerful systems, but security was one of the devils in the details that the automation tools often left to somebody else.
The storage story
So far, we’ve tested and built brain capital regarding private cloud extensions of public resources, and tools to automate public (and perhaps private) cloud use. Next came storage. We tested Amazon’s S3 storage alongside a similar offering from Rackspace, together with a cloud appliance gateway system from Nirvanix, Nasuni’s cloud-based CIFS storage, and Egnyte’s WebDAV-based filing system. Here, security was stronger, if sometimes daunting to deploy.
There’s a misconception regarding cloud storage, and it’s an onerous one. Some seem to feel that the life of data stored in the cloud is so short, and the time it takes to crack a VPN so long, that stored data is protected by its transience.
That might be true for some applications, and certainly there are applications where stored data theft has no value – for example, rendered video segments.
But customer data, and data sets that fall under privacy law and regulatory authority, have to be protected. We saw varying degrees of encryption, ranging from none, through simple SSL transport encryption (without storage encryption), to full AES-256 and beyond. But there was no standard, in-your-face, we-always-encrypt attitude. Much is still left to the diligence of the developer or the administrator of the storage.
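As an example of how much rests on that diligence, here is a minimal sketch of requesting server-side AES-256 encryption on an S3 upload via boto3; the bucket and object names are placeholders, and omitting the encryption parameter would store the object unencrypted unless a bucket default enforces otherwise.

```python
# Uploading to S3 with server-side AES-256 encryption requested per object;
# bucket and key names are placeholders. Leaving out ServerSideEncryption
# stores the object unencrypted unless a bucket default enforces it.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-bucket",             # placeholder bucket
    Key="backups/customer_records.enc",  # placeholder object key
    Body=open("customer_records.db.enc", "rb"),
    ServerSideEncryption="AES256",
)
```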
We found a lot of variance in system response for storage as well. The Internet as a storage transportation highway was faster at night than during the day, but it isn’t likely to come near even a USB 2.0 connection to an external SATA drive. The Internet makes a lousy transport for traditional disk data. To that point, our test results showed symmetry with Amazon’s S3 in terms of upload/download speed, but everyone else demonstrated that uploads were faster than downloads, sometimes by a large factor. None of them were blazingly fast, yet none of them crawled, either.
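One rough way to reproduce this kind of measurement is to time a fixed-size transfer in both directions. The sketch below does so against S3 via boto3, with placeholder names; serious benchmarking would repeat runs at different times of day and with varied object sizes.

```python
# Rough upload/download throughput timing against S3 (placeholder names).
# Real benchmarking needs repeated runs and varied object sizes; this only
# shows the shape of the measurement.
import time
import boto3

s3 = boto3.client("s3")
payload = b"x" * (64 * 1024 * 1024)  # 64 MB test object

start = time.time()
s3.put_object(Bucket="example-bucket", Key="probe", Body=payload)
up_mbps = len(payload) * 8 / (time.time() - start) / 1e6

start = time.time()
s3.get_object(Bucket="example-bucket", Key="probe")["Body"].read()
down_mbps = len(payload) * 8 / (time.time() - start) / 1e6

print(f"upload: {up_mbps:.1f} Mbps, download: {down_mbps:.1f} Mbps")
```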
Canonical’s Stratosphere
The Ubuntu Enterprise Cloud application arrived, and we tested it as our first foray into monolithic private cloud platforms. Here, we felt, was an astute evolutionary move: take a famous Linux distribution and bundle it with just the right starter-kit accessories to enable a private cloud on the cheap. If you know Linux, here was a path to make your own mistakes through the pilot process, become familiar with how clouds work, and, since openEucalyptus is frequently used with AWS, gain knowledge to re-use in the Amazon AWS cloud.
Civilians can’t use UEC; it’s a construction kit. Yet spinning up ‘local cloud’ instances becomes simple, with the details of how to provision virtual appliances and make them do work left to you. Downstream, the work can be added to, repurposed, or automated through other cloud control and management applications.
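Because Eucalyptus speaks an EC2-compatible API, client code written for AWS can, in principle, be pointed at a local UEC cloud by overriding the endpoint. The sketch below assumes a hypothetical local endpoint URL and credentials.

```python
# Pointing an EC2 client at a local Eucalyptus/UEC endpoint instead of AWS;
# the endpoint URL and credentials here are hypothetical placeholders.
import boto3

uec = boto3.client(
    "ec2",
    endpoint_url="http://cloud.example.local:8773/services/Eucalyptus",
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
    region_name="eucalyptus",
)
for image in uec.describe_images()["Images"]:
    print(image["ImageId"])  # list appliances registered in the local cloud
```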
Management constructs
Our goal with the Private Cloud Management Tools review was to highlight the differing approaches taken to manage internal resources that have been poised towards cloud usage profiles.
Novell’s Cloud Manager treats an IT department as an internal MSP. Users see a listing of applications to choose from and get an idea of what’s involved in using them, along with all-important costing, an integral part of the cloud computing model.
Novell’s management application takes care of the details of creating libraries of virtual appliances, costing them out, and doling them out according to either directory-services rules or an internally developed permissions set. It’s an internal cloud ecosystem, and a mature way to look at internal assets that have been repurposed and poised (hopefully after review) toward cloud use.
Another dimension we tested was contained in Citrix’s XenServer 5.6, in its Lab Manager component. Hidden inside is a gem called the Self Service Portal (SSP) that, although not very sophisticated from an accounting/cost perspective, offered an enormous amount of specific virtual appliance and user behavioral controls, the most we’ve seen so far. The templates showed us a wide variety of design controls that could be applied. We were thrilled, despite the fact that it’s specific to XenServer only.
The Eucalyptus Enterprise package seemed to be a work in progress, standing on the shoulders of the openEucalyptus package we saw in UEC. We have great hopes for it, but there’s much work to be done. The upshot is that work ought to be easily repurposable between a private cloud and Amazon’s (and perhaps other) cloud services.
OpenNebula, by contrast, used an open source platform to spin up private cloud resources with comparative ease. With its online ecosystem, it seems poised toward grid and many-core computing needs, and toward more scientific deployments rather than a retail virtual appliance store.
Summing it up
The 800-pound gorilla missing from the corner is Microsoft. We haven’t tested Microsoft’s Azure cloud resources yet, but plan to soon. That means much of what we’ve tested has to do with virtualization platforms that support Linux, with Windows Server editions perhaps an afterthought. Much of the current cloud computing horizon treats disposable computing infrastructure as a drink for thirsty, if non-persistent, applications.
Traditional line-of-business applications that organizations license and use for email/communications and accounting are also moving to hosted resources, either as software-as-a-service or as MSP-hosted applications.
The cloud provider, automation, management, and infrastructure businesses could go a long way toward furthering their growth by providing cross-platform, transparent, and toughened security components.
Henderson is principal researcher for ExtremeLabs in Indianapolis. He can be reached at thenderson@extremelabs.com.
Network World (US)