In a world where technology is supposed to make complex tasks simpler for users (and in some cases actually has), storage administration remains one of the last battlegrounds yet to be simplified in a cost-effective manner.
IT administrators can spend days manually re-allocating pools of storage
from one area of the network to another to accommodate a new application. However, the latest industry buzz phrase, storage virtualization (a technique the mainframe world has used for decades), is designed to change all of that and free IT administrators to concentrate on other company technology needs.
“Storage virtualization is the holy grail of storage networking,” says Jennifer Ewen, senior market analyst for Evans Research Corp. “It’s the ability to dynamically re-assign storage as required; it would be as invisible as memory management is.”
In the early days of storage there was direct-attached storage: when a server was purchased, it came with a disk, and perhaps the ability to accommodate more disks. Or a company would purchase a RAID (redundant array of independent or inexpensive disks) rack. But in either case, the storage was attached directly to the server.
“If you ran out of space on any given server you’re trying to take stuff from other servers and re-allocate and develop logical drives. You have to make sure you’re not impeding on any of the other applications’ space,” says Ewen.
When more storage needed to be added, the IT administrator would have to let all of the servers and applications know whether the storage was available for use. “It was a very labour-intensive process for the IT department to add storage,” says Ewen. “It’s quite a big deal to do it that way and it’s very difficult to change once you’ve got it up and going.”
Then along came storage networking, which, instead of attaching storage directly to the server, placed the storage in a shared pool from which the servers could draw. Adding and subtracting storage was then done at the pool.
“While SANs [storage area networks] have helped to increase utilization of storage capacity by simplifying connectivity, they have also introduced another layer of management for the interconnection of switches, host bus adapters and Fibre Channel-enabled storage ports,” writes Hu Yoshida in his white paper Virtualization — the Promise and the Reality.
Enter storage virtualization, typically a software component of storage networking that constantly evaluates and monitors that storage pool: How much do I have? Where is space available? Am I running out, and should I flag someone? Should I automatically page someone when an application is getting critically low? If a disk drive fails, can its contents automatically be re-assigned to someplace where there is space?
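In pseudocode terms, that watchdog role reduces to a loop of threshold checks over pool metadata. The following Python sketch is purely illustrative; the class, thresholds and volume layout are hypothetical and do not reflect any vendor's actual product:

```python
# Hypothetical sketch of the monitoring logic a virtualization layer
# might run over a storage pool; names and thresholds are illustrative.

WARN_THRESHOLD = 0.80   # flag an administrator at 80 per cent full
CRIT_THRESHOLD = 0.95   # page someone at 95 per cent full

class PoolMonitor:
    def __init__(self, volumes):
        # volumes: dict mapping volume name -> (used_gb, capacity_gb, healthy)
        self.volumes = volumes

    def check(self):
        for name, (used, capacity, healthy) in self.volumes.items():
            if not healthy:
                # a failed disk: re-assign its data to a volume with free space
                target = self.find_free_space(used)
                print(f"{name} failed; migrating {used} GB to {target}")
                continue
            ratio = used / capacity
            if ratio >= CRIT_THRESHOLD:
                print(f"PAGE: {name} critically low ({ratio:.0%} full)")
            elif ratio >= WARN_THRESHOLD:
                print(f"FLAG: {name} running low ({ratio:.0%} full)")

    def find_free_space(self, needed_gb):
        # pick any healthy volume with enough headroom
        for name, (used, capacity, healthy) in self.volumes.items():
            if healthy and capacity - used >= needed_gb:
                return name
        return None
```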
What about the company that has multiple storage sub-systems from various vendors?
“You’ve [probably] got a real heterogeneous mix of storage [and] they all have their own software and their own quirks in terms of actually managing it,” says John Perring, senior director, Canadian operations, DataCore. “There are different skill sets, even different interfaces [and] you need to have highly trained people that are specific to that manufacturer’s product to be able to manage it. What virtualization does is it puts a layer between the server and all the underlying storage — regardless of who it comes from.”
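That layer is essentially an adapter: one common interface placed in front of each vendor's management quirks. A minimal sketch, assuming two made-up vendor backends (nothing here reflects a real product's API):

```python
# Illustrative only: a common interface that hides vendor-specific
# management behind one abstraction, as a virtualization layer does.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """What the virtualization layer requires of any vendor's array."""
    @abstractmethod
    def allocate(self, size_gb: int) -> str: ...
    @abstractmethod
    def free_capacity_gb(self) -> int: ...

class VendorAArray(StorageBackend):
    # wraps vendor A's proprietary management interface (hypothetical)
    def allocate(self, size_gb): return f"vendorA-lun-{size_gb}"
    def free_capacity_gb(self): return 500

class VendorBArray(StorageBackend):
    # wraps vendor B's entirely different interface (hypothetical)
    def allocate(self, size_gb): return f"vendorB-vol-{size_gb}"
    def free_capacity_gb(self): return 200

class VirtualizationLayer:
    """Servers see one pool; the layer picks a backend with room."""
    def __init__(self, backends): self.backends = backends
    def provision(self, size_gb):
        for backend in self.backends:
            if backend.free_capacity_gb() >= size_gb:
                return backend.allocate(size_gb)
        raise RuntimeError("pool exhausted")

pool = VirtualizationLayer([VendorAArray(), VendorBArray()])
print(pool.provision(300))   # a server asks for 300 GB; the vendor is irrelevant
```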
“It’s automating all of those processes that had to be quite labour-intensive and reactive, rather than proactive,” says Ewen.
Some of the other promises of storage virtualization include: the dynamic assignment and expansion of LUNs (logical unit numbers); mapping of LUNs across heterogeneous storage devices; snapshots (sketched below); heterogeneous remote copy; storage firewalls; and RAID.
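Of those, the snapshot is perhaps the easiest to picture. Rather than copying a whole volume, a copy-on-write snapshot preserves only the blocks that are overwritten afterwards. A toy illustration, with all names invented:

```python
# Toy copy-on-write snapshot: illustrative of the idea only.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block number -> data
        self.snapshots = []

    def snapshot(self):
        # record nothing up front; preserve old blocks only when overwritten
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        for snap in self.snapshots:
            # first write to this block since the snapshot? save the original
            snap.setdefault(block_no, self.blocks.get(block_no))
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        # snapshot view: preserved original if overwritten, else live block
        return snap.get(block_no, self.blocks.get(block_no))

vol = Volume({0: "alpha", 1: "beta"})
snap = vol.snapshot()
vol.write(1, "gamma")
print(vol.read_snapshot(snap, 1))   # "beta": the pre-snapshot contents
print(vol.blocks[1])                # "gamma": the live volume
```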
Virtualization also enables better storage use across storage networks, according to the Aberdeen Group’s March 2002 white paper Virtualize This! What Virtualization May Really Mean for Storage. “Historically, storage use — how much storage actually holds data — is less than 50 per cent because storage had to be pre-allocated to specific hosts and applications. With storage networking and virtualization, storage can be pooled and used more efficiently. Better storage use results in significant cost savings by providing more usable storage per dollar.”
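The arithmetic behind that claim is straightforward. A back-of-the-envelope calculation, with hypothetical figures:

```python
# Hypothetical figures: what "more usable storage per dollar" means.
cost_per_raw_gb = 0.05          # dollars per GB of raw capacity (illustrative)
data_gb = 1_000                 # data the business actually needs to store

for utilization in (0.45, 0.80):            # pre-allocated DAS vs pooled SAN
    raw_needed = data_gb / utilization      # raw capacity you must buy
    cost = raw_needed * cost_per_raw_gb
    print(f"{utilization:.0%} utilization -> buy {raw_needed:,.0f} GB raw, ${cost:,.2f}")

# At 45% utilization you buy roughly 2,222 GB to hold 1,000 GB of data;
# at 80% you buy 1,250 GB: the same data for roughly 44 per cent less spend.
```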
While striving to automate cumbersome tasks and simultaneously remain as invisible as memory management is today, storage virtualization has a few stumbling blocks to conquer first, most notably standardization.
“Standards for virtualization could make virtualization solutions a win for everyone,” according to a 2001 Aberdeen Group white paper, Storage Virtualization Needs Standards. “Administrators would have greater tuning capabilities, yielding better storage performance, greater administrator productivity (and therefore lower administrative costs), and more robust, secure storage. Suppliers would have a common target to shoot for that would improve their products’ value and applicability to a wider range of situations.”
While the benefits may be great, having all of the storage networking manufacturers agree to operate with an open standard is not a simple task.
“There is reluctance by other vendors in our industry to do that [conform to an open standard] because they’ve got a certain footprint in the industry. If they start sharing at this level, they [believe] they might lose footprint because every other software vendor can write to their sub-system easily,” says Yves Coderre, product marketing manager for Hitachi Data Systems, Asia Pacific and International Americas.
“It’s a challenge for vendors to make a tool that really is a generic tool that can be used across all the different technologies that are out there,” says Craig Andrews, Canadian technical director of Veritas Software Canada Inc. “You have to form individual partnerships with each vendor and then work with that vendor to understand its interface and build something to its interface. If there was a common standard for doing the management then they really wouldn’t have to do that.”
Hitachi Data Systems in particular is trying to make CIM/SOAP (Common Information Model/Simple Object Access Protocol) the SAN equivalent of the TCP/IP standard.
“It’s really an endeavour to force a standard to all software and hardware vendors to provide a common messaging bus structure such that all software and hardware vendors can write to it,” says Coderre. “Therefore, you don’t have to exchange APIs [application program interfaces] that are specific to each vendor, which become very cumbersome [and] hard to manage.”
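To make the idea concrete, here is roughly what a vendor-neutral request might look like if provisioning commands travelled inside a common SOAP envelope. The class and method names below are invented for illustration and are not drawn from the actual CIM schema:

```python
# Illustrative only: a vendor-neutral, CIM-style request carried over SOAP.
# The point is that every vendor parses the same envelope instead of
# exchanging proprietary APIs. Class and method names are hypothetical.
import textwrap

def cim_soap_request(cim_class: str, method: str, **params) -> str:
    args = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return textwrap.dedent(f"""\
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <{method} cim-class="{cim_class}">{args}</{method}>
          </soap:Body>
        </soap:Envelope>""")

# Any vendor's management software could interpret the same message:
print(cim_soap_request("CIM_StorageVolume", "CreateVolume", SizeGB=100))
```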
Instead, an open standard will enable companies to implement best-of-breed solutions.
“The technology re-fresh cycle these days is about 30 months,” says Andrews. “If you’re locked into [one] vendor’s tools, then moving to another vendor is very difficult. [But,] if you have a generic [or open standard] tool set… then moving to another vendor is very easy.”
Not only will an open standard benefit end users, according to Hitachi, but independent software vendors will also reap the rewards. It will enable many other ISVs to enter the virtualization marketplace without having to develop against a separate set of APIs for each vendor, a cumbersome and long development cycle, says Coderre.
“The benefits of doing so [having standards] would be great, especially for smaller vendors, as these would now appear to be less ‘risky’ choices for users, who could use multi-vendor virtualized storage sub-systems and shop for price and attributes they want — without interoperability or support concerns,” according to