Solid-state drives (SSDs) offer substantial benefits over traditional hard drives – they are faster, more reliable, quieter and use less energy.
On the negative side, their lifespans are limited by the number of writes each memory cell can sustain, and they can cost up to 70 times as much per gigabyte as standard hard drives.
So, where do SSDs fit in an enterprise network? In servers? In storage systems? Somewhere else?
To address those questions, we reviewed a variety of SSD-based products from seven vendors. Three of the products were Peripheral Component Interconnect Express (PCIe) boards – an Adaptec MaxIQ 5805/512 controller, two Apricorn PCIe Drive Arrays, and a FusionIO ioDrive.
In addition, we tested two SAN systems, a Compellent Storage Center 030, and a Dot Hill AssuredSAN 3730. Plus, we tested an HP BladeSystem c-class chassis with two server blades, each equipped with a 160GB StorageWorks IO accelerator module. And we looked at a Ritek 128GB SSD.
First, some definitions: There are two types of SSDs – single-level cell (SLC) and multi-level cell (MLC). SLC drives are faster, have longer lifespans (about 100,000 writes per cell) and cost more.
MLC drives are less expensive, but have typical lifespans of only about 10,000 writes per cell, making them generally inappropriate for write-intensive enterprise applications.
MLC drives can still have a place in the enterprise for read-intensive applications such as serving videos or database lookups, where they can speed throughput and access times at a lower cost than SLC drives.
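To put those endurance figures in perspective, here is a rough back-of-the-envelope lifetime estimate assuming ideal wear-leveling; the capacities, daily write volume and write-amplification factor below are illustrative assumptions, not measurements from our testing.

```python
# Rough SSD lifetime estimate under idealized (perfect) wear-leveling.
# All inputs are illustrative assumptions, not figures from this review.

def ssd_lifetime_years(capacity_gb, writes_per_cell, daily_writes_gb,
                       write_amplification=2.0):
    """Estimate years until the drive exhausts its rated write endurance."""
    total_write_capacity_gb = capacity_gb * writes_per_cell
    effective_daily_writes_gb = daily_writes_gb * write_amplification
    return total_write_capacity_gb / effective_daily_writes_gb / 365

# A hypothetical 64GB SLC drive (about 100,000 writes per cell) vs. a
# 128GB MLC drive (about 10,000 writes per cell), each absorbing 500GB
# of host writes per day:
print(f"SLC: {ssd_lifetime_years(64, 100_000, 500):,.1f} years")
print(f"MLC: {ssd_lifetime_years(128, 10_000, 500):,.1f} years")
```

Even with ten times the capacity, the MLC drive in this sketch wears out in a fraction of the time, which is why write-intensive workloads push buyers toward SLC.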
SSDs are being used as drop-in replacements for standard hard drives in servers, but this is not typically the most effective way to use them. SLC-based SSDs are so much faster than standard hard drives that more than a couple of drives can overrun a standard storage controller.
Also, since SSDs typically are both more reliable and more expensive than regular drives, placing them in a RAID configuration may not be the best use of the hardware.
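A quick back-of-the-envelope sketch shows why a handful of fast drives can swamp a controller; the per-drive rate and controller ceiling here are illustrative assumptions, not numbers from our testing.

```python
# Why a few fast SSDs can overrun a conventional storage controller
# (illustrative figures, not measurements from this review).

ssd_read_mbps = 250            # assumed sustained read rate per SLC SSD
controller_limit_mbps = 800    # assumed controller/host-link ceiling

drives = 1
while drives * ssd_read_mbps <= controller_limit_mbps:
    drives += 1
print(f"{drives} drives ({drives * ssd_read_mbps}MBps aggregate) "
      f"exceed the {controller_limit_mbps}MBps controller ceiling")
```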
These issues are leading to new and different applications for SSDs. Some manufacturers are shipping PCI-X or PCIe boards that carry SSDs (or discrete flash memory) either mounted directly on the board or attached via standard SAS or SATA cables.
Other vendors have created appliances that are placed between servers and storage, operating as cache to speed up access to the storage without having to add SSDs to specific storage arrays.
And some vendors have added SSDs to their existing SAN storage systems, either as cache or as another storage tier (often called tier 0).
This test covers all the categories of storage using SSDs except the appliances that sit between servers and storage. A number of vendors in that category were invited, including Atrato, Dataram, IBM, Schooner Information Technology, Solid Access Technologies, Storspeed, Teradata and Violin Memory, but none were able to get product to us in time for the review.
Our test bed included an HP ML370G5 server running Windows Server 2003 with external storage connected via Fibre Channel through a 2Gbps HP FC switch. Storage performance was tested with IOmeter running a mix of tests intended to show overall improvements in throughput, IOps and latency.
Each product was evaluated in the following areas: performance (throughput, IOps and latency), installation and documentation, ease of use, flexibility of configuration to suit differing network architectures, and price/performance.
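The three performance metrics are interrelated: throughput is simply IOps multiplied by the transfer size, and Little's law ties average latency to IOps and the number of outstanding I/Os. A quick sketch with made-up numbers (not our test results):

```python
# How throughput, IOps and latency relate (illustrative numbers only,
# not results from this review).

block_size_kb = 64          # transfer size per I/O request
iops = 12_000               # I/O operations per second
queue_depth = 32            # outstanding I/Os maintained by the workload

# Throughput is IOps times the transfer size.
throughput_mbps = iops * block_size_kb / 1024
print(f"Throughput: {throughput_mbps:.0f}MBps")

# Little's law: average latency = outstanding I/Os / IOps.
latency_ms = queue_depth / iops * 1000
print(f"Average latency: {latency_ms:.2f}ms")
```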
We found that performance gains with SSDs were dramatic, with anywhere from double to 10 times the performance of hard-drive-based systems. The two Fibre Channel arrays, from Compellent and Dot Hill, were limited only by the 2Gbps Fibre Channel interface and would have shown higher numbers with a 4Gbps or 8Gbps host bus adapter.
Internal storage has an advantage, as the PCI bus can sustain higher throughput and IOps than some external interfaces. With read and write throughput exceeding 700MBps, the fastest of these systems can shatter bottlenecks.
The ‘write cliff’ effect
One issue emerged during our testing that highlights a key difference between high-performance SSDs and consumer-grade ones. The consumer-grade drives (the Adaptec, Apricorn, Dot Hill and Ritek systems) showed dramatic variations in response times (latency) under sustained write conditions, an effect some vendors are calling the ‘write cliff’.
While performance numbers for the other systems remained constant from the beginning to the end of the test, the consumer-grade drives showed drop-offs in performance once each drive had been filled for the first time and its internal garbage-collection and wear-leveling routines kicked in.
This only affects write performance – read performance is unchanged even with the consumer-grade drives.
These variations in response times were most marked with the single MLC-based Ritek drive and the Apricorn MLC-based array. But the other three (the Adaptec, the Apricorn SLC-based array and the Dot Hill system) showed the effect as well, with latencies varying from less than one millisecond to more than a second on the Dot Hill system, and to over three seconds on the Adaptec and the Apricorn SLC-based array. In contrast, the Compellent, FusionIO and HP systems remained under 12 milliseconds even under extended sessions of 100 per cent writes.
The enterprise drives don’t have this issue because of over-provisioning – a drive labeled as 146GB may actually contain 300GB of flash. Some of the systems also have optimized wear-leveling algorithms that move data around only when the drives are not being heavily utilized.
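Using the figures quoted above, a quick calculation shows how much spare area that implies:

```python
# Over-provisioning: spare flash the controller reserves for garbage
# collection and wear-leveling. Figures from the example above.

advertised_gb = 146   # capacity the drive reports to the host
raw_gb = 300          # physical flash actually on the drive

spare_gb = raw_gb - advertised_gb
op_percent = spare_gb / advertised_gb * 100
print(f"Spare area: {spare_gb}GB ({op_percent:.0f}% over-provisioning)")
```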
The trade-off is that the enterprise drives are also more expensive – while the Ritek 128GB drive has an MSRP of $400, the FusionIO modules are $6,829.99 for a 320GB unit, and Compellent 146GB drives are $11,000 each, with a minimum of three required.
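Working those list prices out per gigabyte makes the gap plain:

```python
# Cost per gigabyte at the list prices quoted above.
drives = {
    "Ritek 128GB (MLC)":      (400.00,   128),
    "FusionIO ioDrive 320GB": (6829.99,  320),
    "Compellent 146GB":       (11000.00, 146),
}
for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:,.2f}/GB")
```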
The best bargain may be the Adaptec 5805 controller with the MaxIQ package – it accelerates any internal storage in the system, not merely drives connected to the controller. While it was subject to the ‘write cliff’, adding more than one 64GB SLC SSD should help with this, and the cost for the system is quite low, at $645 for the controller and $1,295 for a single-drive MaxIQ package.
If you have a write-intensive application like a database, you’ll want to look at something like the Compellent array, or the FusionIO and HP system cards, which offer very high performance, though at high prices.
If you’re looking for high performance in applications that are read-intensive or don’t require extended writes, any of the drives will perform better than most hard drives, and some of the solutions are quite inexpensive.
Summary
SSDs can produce higher performance than standard hard drives, though you will need to choose carefully to ensure the drives are matched to your application.
Applications that write large amounts of data over extended periods of time will need write-optimized SSDs, which are more expensive than consumer-grade drives. All of the products we tested offered substantial benefits over standard hard drives, though also at higher prices.
Harbaugh is a freelance reviewer and IT consultant in Redding, Calif. He has been working in IT for almost 20 years, and has written two books on networking, as well as articles for most of the major computer publications. He can be reached at logan@lharba.com.