Channel Daily News

Rethink the storage infrastructure

When Marty Garrison became chief technology officer of ChoicePoint Inc. three years ago, the storage situation was messy. That’s no small matter at a company that manages 16 billion records, such as background checks and insurance applications, eating up 2 petabytes of storage — that’s 2,048TB. And growing. Like many IT leaders, he faced lots of data in lots of silos.

“Storage had grown organically by project, and it was not managed in terms of cost. So we had eight to 10 [storage-area network] infrastructures as islands, none of which could talk to each other. We couldn’t share storage space across islands, and we couldn’t tier our data,” he recalls.

The silos meant there could be no cost efficiencies from bulk purchases, better utilization of the existing storage capacity or a unified management approach that would lower staffing needs. So Garrison created a central, common storage architecture and strategy.

He removed storage management responsibilities from local Unix administrators and hired dedicated storage experts to manage storage globally. He consolidated the SANs into one, reducing management costs and allowing more efficient use of capacity. He pared down the vendors to just a couple for each type of technology, which let him simplify management and buy in bulk for greater discounts. When you buy hundreds of terabytes of storage each quarter, Garrison says, “it really does drive costs down.”

He also introduced tiering, which uses cheaper, slower drives for data that doesn’t need the highest level of availability. “Before that, we had done no performance testing to determine service requirements. The staff played it safe and got Tier 1 Hitachi and EMC disks for everything,” Garrison recalls, at nearly double the price per terabyte of Tier 2 or Tier 3 disks. Altogether, he has slashed storage costs by 40%, both for the disks themselves and for the management overhead. And he hasn’t had to significantly grow his staff despite escalating storage requirements.
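As a rough illustration of what such a tiering policy can look like, here is a minimal sketch in Python; the thresholds, availability targets and dataset names are hypothetical, not ChoicePoint’s actual rules:

```python
# Hypothetical tier-assignment policy. The idea is to stop defaulting
# everything to Tier 1 and instead place each dataset on the cheapest
# tier that still meets its measured service requirements.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    required_availability: float  # e.g. 0.999 = 99.9% uptime required
    reads_per_day: int

def assign_tier(ds: Dataset) -> str:
    if ds.required_availability >= 0.9999 or ds.reads_per_day > 100_000:
        return "tier1"  # fastest, most available, most expensive disk
    if ds.required_availability >= 0.999 or ds.reads_per_day > 1_000:
        return "tier2"  # midrange disk at roughly half the per-TB price
    return "tier3"      # cheap, slower disk for rarely touched data

for ds in [
    Dataset("insurance-applications", 0.9999, 250_000),
    Dataset("background-check-archive", 0.999, 5_000),
    Dataset("closed-case-records", 0.99, 40),
]:
    print(ds.name, "->", assign_tier(ds))
# insurance-applications -> tier1
# background-check-archive -> tier2
# closed-case-records -> tier3
```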

Garrison is now exploring new ways to keep costs in check, including storage virtualization and single-instance storage. “Now it’s time to go into the next phase,” he says.

You must move to a simplified storage architecture to reduce total cost of ownership, analysts say. Even as the cost of new storage media falls by as much as 34% annually, spending driven by rising capacity and service-level demands can grow by more than 60% a year, says Stewart Buchanan, an analyst at Gartner. “Enterprises need more business discipline in IT asset management of storage,” he says.
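Those two rates pull in opposite directions, and the demand side wins. A quick back-of-the-envelope illustration using Buchanan’s figures (the starting spend index is arbitrary):

```python
# If demand grows 60% a year while per-TB media prices fall 34% a year,
# total spend still rises. Rates are as quoted above; year-0 spend is
# normalized to 1.0 purely for illustration.
demand_growth = 1.60   # capacity and service-level demand, +60%/year
price_decline = 0.66   # media price, -34%/year

spend = 1.0
for year in range(1, 4):
    spend *= demand_growth * price_decline
    print(f"year {year}: spend index {spend:.2f}")
# year 1: spend index 1.06  -> still up ~6% despite cheaper disks
# year 2: spend index 1.12
# year 3: spend index 1.18
```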

The good news: CIOs have more storage choices, and more mature choices, than they did just a few years ago. Some approaches that were once novel and untested, such as tiered storage and its related archival approach of hierarchical storage management, are now proven, says Nik Simpson, an analyst at the Burton Group. This is also true for the use of SANs.

One increasingly popular source of savings is replacing tape backup with disk backup (also called virtual tape libraries), says Dave Dillehunt, CIO at the integrated delivery network FirstHealth of the Carolinas. Tape capacity has not kept up with hospital storage requirements (about 185TB at FirstHealth), and physically managing the tapes has become too burdensome, he says. One caveat: disk-based backup brings the temptation to keep the data online, which can overload storage networks because people will use data if it is available. That’s why Dillehunt keeps the disk backup disconnected from the rest of the network.

If your storage needs are modest, tape does continue to make sense because the medium costs so much less, notes Rich O’Neal, senior vice president of operations at the online rewards-tracking site Upromise. That’s the case for his 4TB of data.

Of the established approaches, tiering offers the most significant bottom-line benefit, says Gartner’s Buchanan. It not only lets you increase the proportion of cheap storage to expensive storage but also forces you to understand the service levels for all your data. Then you can reduce costs by deleting, or at least not backing up, unneeded data. You can move rarely used data to offline storage to keep network traffic under control. And you can begin to manage demand by showing users the full data life-cycle costs of their requested applications. “Tiering lets you find the total cost of ownership of your storage,” he says.

A good target: Keep 30% of your data in Tier 1 storage and the rest at lower tiers, advises Burton Group’s Simpson, though the exact ratio depends on the performance and availability requirements for your data.
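Combining that target with the price gap Garrison cites (Tier 1 at nearly double the per-terabyte price of lower tiers) makes the payoff easy to quantify. A quick sketch with normalized, illustrative prices:

```python
# Blended cost per TB under the 30/70 target versus all-Tier 1 storage.
# Prices are normalized; the article says only that Tier 1 runs nearly
# double the per-terabyte price of Tier 2 or Tier 3 disk.
tier1_per_tb = 2.0
lower_per_tb = 1.0

all_tier1 = 1.00 * tier1_per_tb
blended = 0.30 * tier1_per_tb + 0.70 * lower_per_tb
print(f"blended cost is {blended / all_tier1:.0%} of all-Tier 1")  # 65%
```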

It’s critical for the CIO to make sure that the business takes responsibility for its data demands. “It’s not the role of the storage team to define the data requirements — that has to go to business management,” Buchanan says. But the CIO has to lay the groundwork by putting effective asset management in place and showing that IT itself manages storage efficiently.

Among newer technologies that can help reduce storage costs, the most notable in recent years is iSCSI, a protocol that carries SCSI block commands over ordinary TCP/IP networks and lets organizations of all sizes deploy SANs. Before iSCSI, the major SAN option was Fibre Channel, but “Fibre Channel is not suited outside larger enterprises,” Simpson notes, because of its complexity and its high management cost.
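The appeal is that block reads and writes travel over a plain IP network. The toy sketch below illustrates only that core idea; it is not the real iSCSI protocol, which adds sessions, SCSI command descriptor blocks, digests and much more:

```python
# Toy block server: NOT real iSCSI, just the underlying idea of carrying
# block read/write commands over an ordinary TCP/IP socket instead of a
# dedicated Fibre Channel fabric.
import socket
import struct
import threading
import time

BLOCK_SIZE = 4096
NUM_BLOCKS = 256  # a 1MB toy volume, held in memory

def recv_exact(conn, n):
    """Read exactly n bytes from a socket, or return b'' on EOF."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def serve(port=3260):  # 3260 is the port real iSCSI targets listen on
    volume = bytearray(BLOCK_SIZE * NUM_BLOCKS)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            # Each request: 1-byte opcode (R/W) + 4-byte block number.
            header = recv_exact(conn, 5)
            if not header:
                break
            op, (block,) = header[:1], struct.unpack("!I", header[1:])
            offset = block * BLOCK_SIZE
            if op == b"R":
                conn.sendall(volume[offset:offset + BLOCK_SIZE])
            elif op == b"W":
                data = recv_exact(conn, BLOCK_SIZE)
                if data:
                    volume[offset:offset + BLOCK_SIZE] = data

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)  # crude wait for the listener to come up
    cli = socket.create_connection(("127.0.0.1", 3260))
    cli.sendall(b"W" + struct.pack("!I", 7) + b"x" * BLOCK_SIZE)  # write block 7
    cli.sendall(b"R" + struct.pack("!I", 7))                      # read it back
    print(recv_exact(cli, BLOCK_SIZE)[:4])  # b'xxxx'
    cli.close()
```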

The simplicity of iSCSI and its fit for a broader range of organizations make it the fastest-growing interconnect technology for storage, reports IDC; the research firm expects 25% of all external storage sold in 2011 to be iSCSI-based.

Regional accounting firm Schenck Business Solutions dropped its EMC Fibre Channel array three years ago because of its complexity, replacing it with an EqualLogic iSCSI-based SAN. “We had struggled with configuration and day-to-day usage,” recalls CIO Jim Tarala. Since then, the company’s storage capacity has increased from about 330GB to 20TB, but he’s got a handle on overall cost. “We spent approximately 120% of what we did on the EMC gear (330GB) to get the EqualLogic (20TB), and our management costs are a maximum of 60% to 65% of what they were previously,” Tarala says. He expects to upgrade the storage to 30TB soon.
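A rough reading of Tarala’s numbers shows how dramatic the shift is; the ratios below come from the article, while the absolute spend is normalized:

```python
# Back-of-the-envelope math on Schenck's figures. Only the ratios are
# from the article; the original EMC spend is normalized to 1.0.
old_capacity_tb = 0.33   # 330GB on the EMC Fibre Channel array
new_capacity_tb = 20.0   # 20TB on the EqualLogic iSCSI SAN
spend_ratio = 1.20       # "approximately 120% of what we did"

capacity_growth = new_capacity_tb / old_capacity_tb   # ~61x the capacity
per_tb_ratio = spend_ratio / capacity_growth          # ~0.02
print(f"capacity grew {capacity_growth:.0f}x")
print(f"hardware cost per TB fell to ~{per_tb_ratio:.0%} of the EMC level")
```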

Associated Bank, which serves several Midwestern states, had a similar experience. In 2005, it needed to rethink its storage strategy to prepare for the expected volumes of image data, such as electronic check images and customer records, because the bank was implementing a program to let customers start an application at one branch and finish it at any other. When the storage initiative began, the bank had about 20TB of data; it now has 300TB.

The bank built its SAN using iSCSI arrays because it wanted an IP-based network to take advantage of its staff’s existing networking skills, recalls Preston Peterson, assistant vice president of infrastructure design. Still, just in case Fibre Channel becomes necessary later on, the bank made sure its Compellent storage arrays could support both Fibre Channel and iSCSI.

The move to iSCSI did raise questions, notes Kory Kitowski, the bank’s vice president of IT. For example, engineers from Microsoft and other vendors weren’t familiar with iSCSI, so they questioned unfamiliar server and SAN settings when installing or troubleshooting their own products. Internally, despite having IP-savvy IT staff, the bank still needed to reeducate the storage administrators. “We went through a major paradigm shift,” Kitowski says.

But the result was a 30% overall savings compared with what the bank had expected to spend on traditional SANs, Peterson says.

Even within large companies, there’s no longer a need to rely solely on Fibre Channel, says ChoicePoint’s Garrison, who uses either iSCSI or Fibre Channel, based on the specific storage’s availability needs.

As companies get these structural changes in place, both Simpson and Buchanan advise that for further savings, CIOs should begin looking at two emerging technologies: network storage virtualization and single-instance storage. Network storage virtualization moves management out of the arrays and other disk hardware, and implements it as part of the SAN’s operating environment. This lets IT treat all the disks as a virtual resource pool.
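To make the pooling idea concrete, here is a minimal sketch of such a virtualization layer; the array names, capacities and allocation scheme are invented for illustration and don’t reflect any vendor’s product:

```python
# A virtualization layer that pools capacity from several physical arrays
# and hands out virtual volumes, so servers never need to know which
# array actually holds their blocks. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Array:
    name: str
    free_gb: int

@dataclass
class VirtualPool:
    arrays: list
    volumes: dict = field(default_factory=dict)  # name -> [(array, GB), ...]

    def total_free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def create_volume(self, name, size_gb):
        """Carve a virtual volume out of whichever arrays have space."""
        if size_gb > self.total_free_gb():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for a in self.arrays:
            take = min(a.free_gb, remaining)
            if take:
                a.free_gb -= take
                extents.append((a.name, take))
                remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = extents
        return extents

pool = VirtualPool([Array("emc-1", 500), Array("hds-2", 200), Array("eql-3", 800)])
# A 600GB volume quietly spans two arrays; the server sees one LUN.
print(pool.create_volume("oracle-data", 600))  # [('emc-1', 500), ('hds-2', 100)]
```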

Single-instance storage saves space by keeping just one copy of a piece of data in your front-line systems (such as application servers) and substituting pointers to that copy for any duplicates. The related de-duplication technology does the same during backup or archiving: it saves just one copy of each file or data block and substitutes pointers for any later copies it finds. Long available for e-mail servers, single-instance technology is now appearing as a feature both of backup and archival systems and of front-line storage systems, notes Burton Group’s Simpson.
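A minimal sketch of the de-duplication idea, using content hashes as the pointers; the class and block size are hypothetical, not any product’s implementation:

```python
# Block-level de-duplication: store each unique block once, keyed by a
# content hash, and keep lists of hashes as pointers for every copy.
import hashlib

BLOCK_SIZE = 4096

class DedupStore:
    def __init__(self):
        self.blocks = {}  # SHA-256 digest -> block bytes (single instance)

    def write(self, data):
        """Store data; return the pointer list that stands in for it."""
        pointers = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # first copy only uses space
            pointers.append(digest)
        return pointers

    def read(self, pointers):
        """Reassemble the original data from its pointers."""
        return b"".join(self.blocks[p] for p in pointers)

store = DedupStore()
attachment = b"Q3 report" * 1000
# The same attachment mailed to 50 people is stored once, plus pointers.
refs = [store.write(attachment) for _ in range(50)]
print(len(store.blocks), "unique blocks stored")  # 3, not 150
assert store.read(refs[0]) == attachment
```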

But several factors limit these technologies’ adoption, says Gary Fox, national practice director at consultancy Dimension Data.

Fox says that network storage virtualization technology proves complex to manage, despite vendors’ characterization of it as plug-and-play.

As for single-instance storage, worries about data loss surround the pointer approach, and most companies are still piloting the technology, Fox says. It also comes primarily from start-up vendors, though Fox expects that to change. Still, despite its immaturity, “We see a lot of interest from clients,” he says. After all, those clients also foresee continued unbridled storage growth.