Channel Daily News

Managing the flow

Storage virtualization is a complex name for a simple idea: treat all storage devices and file systems on a network as a single, dynamically managed and provisioned resource.
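
In code, the idea might look something like the toy sketch below (the device names, sizes and allocation scheme here are hypothetical, purely for illustration and not any product's behaviour):

    # A toy storage pool: several physical devices appear as one
    # dynamically provisioned resource. Callers ask for capacity and
    # never learn which device actually backs it.
    class StoragePool:
        def __init__(self, devices):
            self.devices = dict(devices)   # device name -> free GB

        def total_free(self):
            return sum(self.devices.values())

        def provision(self, volume_gb):
            if volume_gb > self.total_free():
                raise ValueError("pool exhausted")
            layout, remaining = {}, volume_gb
            for name, free in self.devices.items():
                take = min(free, remaining)
                if take:
                    layout[name] = take
                    self.devices[name] -= take
                    remaining -= take
            return layout   # the caller just sees one volume

    pool = StoragePool({"array-a": 500, "array-b": 300, "nas-c": 200})
    print(pool.provision(600))   # e.g. {'array-a': 500, 'array-b': 100}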

Storage virtualization is a technology whose time has come, and it has turned the storage business into an exciting one, opening new opportunities for the channel.

“It’s the whole idea of utility computing,” says Info-Tech Research analyst John Sloan.

“The idea is that it’s not about boxes any more. Instead of counting disks, if you can abstract all your storage and use it as a single resource, you can abstract its management, too. That promises the most efficiency and the biggest bang for your storage buck.”

Though storage virtualization is suddenly hot, it isn’t really all that new. It’s predicated on the same ideas that have allowed organizations to use terabyte disk arrays as single volumes for a number of years.

What is new is that enterprise storage requirements, and the technology that allows large-scale virtualization, have both grown to the point where virtualization at that scale is not only possible, but makes sense.

As the tools have leapt from the arrays to the network, the market for storage management — including virtualization — software has taken off. According to International Data Corp., storage software revenues grew 12.6 per cent in 2005 to reach US$8.9 billion.

The real selling point for storage virtualization is its transparency. Rather than matching data to disk arrays and disks to servers, it promises to make capacity and data available as they are needed.

“What customers are really interested in are infrastructures that don’t care what the underlying technology is,” says Ken Steinhardt, EMC Corp.’s director of technology and analysis.

“Storage is not a technology now; it’s a resource that uses technologies. The data you need now is on Fibre Channel, what you don’t need right away is on network-attached storage and servers with serial ATA, and your archives are on tape. The key thing, and virtualization is part of this, is that it’s all treated as a single transparent resource.”
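
A rough sketch of the tiering Steinhardt describes, again purely for illustration (the tier names and the age thresholds below are assumptions, not EMC's policy):

    from datetime import datetime, timedelta

    # Choose a tier from how recently data was touched; applications
    # read and write through one interface and never pick a tier.
    def choose_tier(last_access, now):
        age = now - last_access
        if age < timedelta(days=30):
            return "fibre-channel"     # data you need now
        if age < timedelta(days=365):
            return "nas-serial-ata"    # data you don't need right away
        return "tape-archive"          # archives

    now = datetime(2006, 6, 1)
    print(choose_tier(datetime(2006, 5, 20), now))   # fibre-channel
    print(choose_tier(datetime(2004, 1, 1), now))    # tape-archive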

Indeed, storage virtualization has walked in lock-step with server or CPU virtualization, the other big ingredient in the utility computing model. The promise of on-demand CPU availability, and the ability to run virtually any application on any available computing device, has become an attractive prospect to organizations whose computing requirements have been squeezed into traditional infrastructures. Why leave some processor time fallow if you can abstract your CPU resources across the network?

Lets data move
“And as you do that, you have to ensure access to data wherever you’re running your application,” says Allan Baratz, CEO of NeoPath Networks, a Mountain View, Calif.-based file management software vendor. “Data virtualization allows you to move data around the infrastructure as you move the applications around.”

Moreover, capacity is cheap, and the storage hardware market is booming. Organizations have been buying capacity like candy to solve their storage problems.

But more capacity can create new data management problems, according to George Teixeira, president and CEO of DataCore, a Fort Lauderdale, Fla.-based virtualization software developer. Those problems have created a market primed for storage virtualization solutions. “It’s a simple equation,” Teixeira says. “The more drives you have, the bigger the headache you have managing storage across the network, and the more you need virtualization tools to consolidate and manage storage.”

More than many other hot new technologies, storage virtualization really does work. That’s good news for solution providers advising clients looking for ways to rationalize their storage investments in a tiered storage model.

Virtually every major storage vendor is making a play in the virtualization market, with all the requisite management tools. Hewlett-Packard has focused a great deal of energy in the last year on its StorageWorks virtual array management system, and acquired AppIQ to enhance its storage management offerings. EMC’s Invista networked virtualization solution resides on the company’s intelligent SAN switches. Third-party developers have also begun to carve a niche for themselves.

Small vendor play
“The smaller vendors also have been pushing better management of virtual storage,” Sloan says. “Compellent, for example, has a lot going on in its StorageCenter block-level virtualization package, including Data Progression — a sort of ILM (information lifecycle management) feature that moves blocks of data from more expensive to less expensive disks in the background with no downtime for the virtual storage volumes.”
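
The general idea behind a block-level feature like that can be shown in a few lines (a toy sketch, not Compellent's implementation; the tier names and access threshold are assumptions):

    # Background migration in miniature: cold blocks are remapped from
    # the expensive tier to the cheap one while the volume stays online,
    # because only the block map changes from the host's point of view.
    def migrate_cold_blocks(block_map, access_counts, threshold=5):
        for block, tier in list(block_map.items()):
            if tier == "expensive" and access_counts.get(block, 0) < threshold:
                block_map[block] = "cheap"   # remap once the copy completes
        return block_map

    blocks = {0: "expensive", 1: "expensive", 2: "cheap"}
    hits = {0: 120, 1: 2}
    print(migrate_cold_blocks(blocks, hits))
    # {0: 'expensive', 1: 'cheap', 2: 'cheap'}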

And to a considerable degree, the promise of storage management tools that provide greater efficiencies and no disruption is what the excitement is all about.

“Companies want to be able to move their data onto the appropriate storage device transparently,” without impacting business processes and users, Baratz says. “That’s where this kind of technology is really useful. Users don’t really care about how data is available, or that it’s being backed up, as long as it’s non-disruptive. I compare it to a hybrid car engine: when you step on the gas, the car goes. You don’t select gas or electric. The technology works that out for you.”

There are, in fact, two models for virtualization.

Homogeneous virtualization is probably the easiest to deploy — and for vendors to deliver — because it simply treats a group of networked storage devices from the same vendor as a single unit. Heterogeneous virtualization, on the other hand, abstracts devices from a number of different vendors as a single storage utility. It can be more difficult to accomplish, but it has a powerful attraction for the vendors themselves.
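
The difference is easy to see in a sketch (the vendor names below are just labels for illustration):

    # Heterogeneous virtualization in miniature: one interface over
    # arrays from different vendors. A homogeneous pool would simply
    # contain arrays of a single make.
    class Array:
        def __init__(self, vendor, free_gb):
            self.vendor, self.free_gb = vendor, free_gb

    class VirtualPool:
        def __init__(self, arrays):
            self.arrays = arrays               # IBM, EMC and HP side by side

        def allocate(self, gb):
            for array in self.arrays:
                if array.free_gb >= gb:
                    array.free_gb -= gb
                    return array.vendor        # the caller never chose one
            raise ValueError("no array has enough free space")

    pool = VirtualPool([Array("IBM", 100), Array("EMC", 400), Array("HP", 250)])
    print(pool.allocate(300))                  # 'EMC'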

“Think about it,” Sloan says. “You don’t just pool storage in an IBM array, but you can bring in the EMC array and the HP array as well. This is great for organizations that have invested in different technologies over time, but it’s also great for the vendor. If IBM is your virtualization vendor, then you only have to go back to IBM for your storage needs. It can offer a great competitive advantage.”

That is why so many Tier One vendors are rushing into the market with their own virtualization offerings. In effect, it enables vendors, integrators and service providers to leverage a position of dominance in corporate data centres.

Storage virtualization is predicated on the idea that networked storage intelligence is far more important than the commodity hardware upon which data actually reside. Consequently, there’s a real market advantage in pitching virtualization intelligence.

The market has been very receptive to the pitch. Sloan has noted a considerable amount of interest in the mid-size company market, particularly in homogeneous virtualization technologies. The market strength comes largely from companies investing in SANs to consolidate their storage pools as a first step to server consolidation.

The big question, however, is how, or rather, where it fits into enterprise SANs and data centres. “Virtualization is a natural evolution of putting more intelligence into the network,” says Ron Wofford, IBM storage marketing manager, “but there are places all over the network where you can insert some intelligence. Do you manage it right from the arrays, or from the SAN switch, so it virtualizes every storage device it sees? Or do you drop a virtual storage management device like IBM’s SAN Volume Controller into the network?”

“There’s really no definitive answer to the question because there are so many variables involved in answering it,” Sloan says.

“The technologies you use will differ depending on what your existing storage hardware investment happens to be, whether it’s an IP network or a Fibre Channel SAN, and which vendor you choose. The area that’s big right now is building the intelligence into the switches, like EMC’s Invista, but your options will be different if you go with someone else, like IBM.”

Choices vary with client size
And, not surprisingly, the options that can be offered will also vary depending on the size of the client and of its existing hardware investment.

The real excitement for heterogeneous virtualization comes primarily from large enterprises with considerable, and diverse, investments in storage technologies. But integrators and service providers should be looking well beyond the Fortune 500.

“To achieve traction in the mass market, the technology has to be adopted by the mass market,” says Wofford.

“It’s very likely that the real growth is going to come in the midmarket area. When you think about it, the ability to manage storage as a single resource can be very attractive to companies without large IT staffs.”

That, at least, is what DataCore’s Teixeira is hoping for. Storage virtualization has a strong play at the lower end of the market, where concepts like ILM and tiered storage are little more than buzzwords, and where third-party vendors have better prospects.

DataCore’s SANMelody software, in fact, is aimed squarely at organizations whose storage demands have pushed past the limits of internal drives on separate servers.

“Tiered storage and ILM are really pushed by hardware vendors who want to sell hardware to migrate to,” Teixeira says.

“Most small companies can’t do that. What they want is to be able to manage their resources more efficiently. Virtualization lets them have one class of storage that they can use software to add features to.”