Storage costs eat up at least 11 per cent of IT hardware budgets, but there are plenty of ways to save money without sacrificing performance or security. In a new report called “Five Key Storage Strategies for a Down Economy,” Forrester analyst Andrew Reichman provides a road map for smart purchasing and maximizing the resources you already have. Here’s a summary.
1. Play hardball with vendors. The storage market is highly competitive, but vendors also know that the cost of switching can be prohibitive. This means your current vendor might have become complacent, particularly if you have been loyal for many years, expanding capacity without competitive bids, Reichman writes.
But as the economy gets worse, “storage vendors will be trying even harder to win new deals and protect their existing accounts from competitors trying to do the same thing,” Reichman writes. “Use this situation to your advantage by introducing a fresh sense of competition among the vendors you work with.” By undertaking a request for proposals bidding process, you can win discounts from your current vendor or discover a new, less-expensive vendor you weren’t aware of.
2. Avoid new purchases by reclaiming what you have. Wasted storage, not surprisingly, is a waste of money. Storage is routinely allocated but never used, for a variety of reasons. “Some applications and operating systems don’t lend themselves to gradual storage expansion over time; they require a large up-front allocation that may or may not be consumed eventually,” Reichman writes. “This tendency for over-allocation combined with limited ability to effectively forecast data growth in most organizations leads to a significant gap in the amount of capacity that is allocated versus actually used.”
Reclaiming wasted storage will often require application downtime, making careful planning necessary. Using storage virtualization is one way to migrate without disruption.
Other examples of wasted storage include servers that have been taken offline without their associated storage being returned to the free pool, and storage that’s “‘mapped but not masked,’ meaning it has been allocated within the storage array but not recognized by a server.”
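To make the allocated-versus-used gap concrete, here is a minimal sketch of the kind of report a storage team might run against its own volumes. The mount points and the 20 per cent threshold are illustrative assumptions, not figures from the report.

```python
# Minimal sketch: flag volumes where most of the allocated capacity sits unused.
# The mount points and the 20% threshold are hypothetical placeholders.
import shutil

MOUNT_POINTS = ["/data/app1", "/data/app2", "/data/archive"]  # hypothetical volumes
USAGE_THRESHOLD = 0.20  # flag anything less than 20% used as a reclamation candidate


def reclamation_candidates(mounts, threshold):
    candidates = []
    for mount in mounts:
        usage = shutil.disk_usage(mount)        # total, used, free in bytes
        used_fraction = usage.used / usage.total
        if used_fraction < threshold:
            candidates.append((mount, used_fraction, usage.free))
    return candidates


if __name__ == "__main__":
    for mount, used_fraction, free_bytes in reclamation_candidates(MOUNT_POINTS, USAGE_THRESHOLD):
        print(f"{mount}: only {used_fraction:.0%} used, "
              f"{free_bytes / 2**30:.1f} GiB could be reclaimed")
```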
3. Audit backup and replication configurations to cut waste. As important as disaster recovery is, the technologies that enable it sometimes lead to waste. “In a typical storage environment, there can often be as many as 10 copies of the same data — several days of full backups, a couple of snapshots, and a fully replicated copy at the alternative site,” Reichman notes. “Most backup systems have inadequate reporting capabilities, so it’s difficult for storage administrators to associate applications to their backup jobs and their retention schedules.”
An audit of backup policies and storage configurations can “eliminate unnecessary backup jobs, snapshots, clones and replication, and can return unused disk or tape media to the free pool to reduce future expenditure.” Another strategy is to review replication levels to make sure the right amount of storage is being allocated to each application. This work can be tedious but can also be done internally and for little cost.
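Much of this audit is an inventory exercise, and a rough first pass can be scripted. The sketch below assumes a hypothetical CSV export of backup jobs and a plain-text list of active servers; real backup tools expose this data in their own formats, so treat the file names and columns as placeholders.

```python
# Minimal sketch of a backup audit: cross-reference a backup-job export against the
# active server inventory and count retained copies per application.
# File names and column names are assumptions for illustration only.
import csv
from collections import Counter

ACTIVE_SERVERS_FILE = "active_servers.txt"   # one hostname per line (hypothetical)
BACKUP_JOBS_FILE = "backup_jobs.csv"         # columns: job_id, server, application, copy_type


def load_active_servers(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def audit_backup_jobs(jobs_path, active_servers):
    orphaned_jobs = []          # jobs still backing up retired servers
    copies_per_app = Counter()  # how many separate copies exist per application
    with open(jobs_path, newline="") as f:
        for row in csv.DictReader(f):
            copies_per_app[row["application"]] += 1
            if row["server"] not in active_servers:
                orphaned_jobs.append(row["job_id"])
    return orphaned_jobs, copies_per_app


if __name__ == "__main__":
    active = load_active_servers(ACTIVE_SERVERS_FILE)
    orphans, copies = audit_backup_jobs(BACKUP_JOBS_FILE, active)
    print("Jobs pointing at retired servers:", orphans)
    for app, count in copies.most_common():
        if count > 3:  # illustrative threshold for "too many copies"
            print(f"{app}: {count} separate copies retained")
```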
4. Rethink storage network decisions. When you need high performance and availability, Fibre Channel isn’t the only option. Alternatives that can sometimes cost less while still meeting performance needs include iSCSI, the Network File System (NFS) protocol, and direct-attached storage, Reichman writes.
Oracle and VMware are throwing “their hat into the NFS ring,” he says, noting that more applications are supporting NFS as a way to connect servers to storage. Direct-attached storage is also a good alternative when the benefits of centralized networked storage are limited, such as when each storage device is dedicated to one application.
“While these options might not make sense for every application or every environment, cost-conscious firms should take a good, hard look at their storage network decisions and give some consideration to [these] approaches,” Reichman writes.
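One way to frame that hard look is a back-of-the-envelope cost comparison. Every number in the sketch below is a made-up placeholder to be swapped for real quotes; it only shows the shape of the calculation, not actual pricing.

```python
# Illustrative sketch only: compare rough total cost of connectivity options.
# All prices and capacities are placeholder assumptions; substitute real vendor quotes.

OPTIONS = {
    # option: (per-server connectivity cost, per-GB storage cost) -- placeholders
    "Fibre Channel SAN": (1500.0, 0.90),
    "iSCSI over existing Ethernet": (300.0, 0.60),
    "NFS (NAS)": (200.0, 0.55),
    "Direct-attached storage": (0.0, 0.45),
}


def total_cost(servers, usable_gb, connectivity_per_server, per_gb):
    return servers * connectivity_per_server + usable_gb * per_gb


if __name__ == "__main__":
    servers, usable_gb = 20, 50_000  # hypothetical environment
    for name, (conn_cost, gb_cost) in OPTIONS.items():
        print(f"{name}: ${total_cost(servers, usable_gb, conn_cost, gb_cost):,.0f}")
```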
5. Use a tiering methodology that delivers results simply. Every cost-control strategy requires an up-front investment of time or money, but for some the ROI happens quickly. Tiering, on the other hand, has to be viewed as a long-term strategy because you won’t realize savings right away. For many users, the complexity of tiered storage has outweighed cost savings, but that doesn’t mean it can’t be effective.
“If tiering means buying a wholly separate platform in addition to the tier one infrastructure, it can take years to realize any benefit,” Reichman notes. “By shifting investments you would already have made to lower tiers instead, you can realize cost avoidance.”
Because of the down economy, more businesses are putting data on tier two storage right from the beginning, and only promoting it to tier one if performance is unacceptable, he writes. “Buying cheaper, dense disks in the systems you already own makes sense for tiering without the added cost of a separate platform,” Reichman writes. Remember to keep it simple and consistent: having too many tiers and options makes the environment hard to manage, which can eat into the cost savings.
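One rough way to spot data that belongs on the cheaper tier is access age. The sketch below totals how much data has gone untouched for a set period; the path and the 90-day cutoff are assumptions for illustration, not recommendations from the report.

```python
# Minimal sketch of a "start low, promote if needed" tiering check: total up data
# that has not been read recently and could live on cheaper, denser disks.
# The root path and the 90-day cutoff are hypothetical.
import os
import time

DATA_ROOT = "/data/app1"   # hypothetical tier-one volume
COLD_AFTER_DAYS = 90       # assumed cutoff for "cold" data


def cold_files(root, cutoff_days):
    cutoff = time.time() - cutoff_days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                stats = os.stat(path)
            except OSError:
                continue                    # file vanished or is unreadable; skip it
            if stats.st_atime < cutoff:     # last access older than the cutoff
                yield path, stats.st_size


if __name__ == "__main__":
    total_bytes = sum(size for _, size in cold_files(DATA_ROOT, COLD_AFTER_DAYS))
    print(f"{total_bytes / 2**30:.1f} GiB not touched in {COLD_AFTER_DAYS} days; "
          "candidate for a lower tier")
```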