For the first few decades of the computer era, the well-worn migratory path of a technology was from the business to the home user. In the last few years, however, this has begun to change: several technologies well established in the consumer market are only now beginning to make inroads in the enterprise. For example, while many corporations struggle to profitably leverage Web 2.0 technologies, the average teenager seems to have no such difficulty.

Another prime example is flash-based solid-state drives (SSDs), which have replaced mechanical hard drives in many consumer devices. SSDs have no moving parts, offer faster access speeds and consume less energy than traditional, spinning hard drives. While ubiquitous in MP3 players and, increasingly, in laptop computers, the technology remains exceedingly rare in data centers.

Even proponents of SSDs concede that there is ample reason for this. Only in recent years have vendors begun to cobble together flash memory modules into SSDs of appreciable size. "Data centers don't move lightning fast," says Dean Klein, VP for memory system development at Boise, Idaho-based NAND (flash) memory manufacturer Micron Technology Inc. "Nobody wants to take a chance on an unproven technology."

Cost is also a consideration. The price per gigabyte of flash memory is high, especially when compared to the traditional, spinning hard drives that currently dominate data centers. "They are small, and they are expensive," says Chris Pollard, director of storage engineering at New York-based Metropolitan Life Insurance Co. (MetLife). "We don't have any flash drives on the floor, but we are strongly considering them."
Indeed, any insurance company considering a move toward SSDs will have much to consider. Much of that process will entail wading through the hype that surrounds SSDs and figuring out how best to use them. "SSDs have a compelling value proposition, but the data center and storage architectures have been focused on hard drive technologies," warns Joe Unsworth, lead analyst at Stamford, Conn.-based Gartner Inc.

Yet, on purely technical terms, much of the hype seems justified. SSDs are fast, offering extremely low read latency and performing more than 100,000 input/output operations per second (IOPS). These low access times have big performance implications, as slow I/O has become a bottleneck while processors grow ever more powerful in the multi-core era. While traditional hard drives may keep CPU cores only about 25% utilized, some SSDs can drive them to 100%, Unsworth says. "Traditionally, hard drives will give a response in 15 to 20 milliseconds," says Greg Goelz, VP of marketing for Milpitas, Calif.-based Pliant Technology, which makes SSDs geared for the enterprise. "In CPU clock cycles that's an eternity. SSDs give it back in microseconds."

In addition to speed, the other primary value proposition for SSDs is that they consume less energy than spinning drives. As data centers look to trim power and cooling costs, hard drives spinning away at 15,000 RPM are tempting places to start.

What's more, SSDs, at least in theory, will enable data centers to pare back on the over-provisioning of drives. Redundancy is currently the norm in data centers: hard drives are put into Redundant Array of Independent (or Inexpensive) Disks (RAID) setups for reliability purposes, with one drive mirroring another in case of mechanical failure. "SSDs don't require the level of redundancy you would need for rotating media," Klein says. Additionally, for performance reasons, 15,000-RPM hard drives are often "short-stroked."
This technique entails utilizing just the outer edges of a hard drive's spinning platter to achieve maximum performance, at the cost of using only 10% to 15% of the drive's storage capacity. "It's a very inefficient way to get high performance," Unsworth says.

By eliminating the need for short-stroking, and by cutting back on some of the redundancy, data centers could reclaim lost capacity and save energy on more than a one-to-one basis, since every watt a drive doesn't draw is also a watt that doesn't have to be cooled away. "When you re-architect, you start to see real energy savings," Goelz says.

Klein tempers his enthusiasm on this point. "At this point, you'll still need RAID arrays, but going forward there will be architectural improvements that hit the data center to mitigate that," he says. "But, that's still a ways out."
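Two of the quantitative claims above lend themselves to back-of-the-envelope arithmetic. The sketch below is purely illustrative: the latency figures come from the ranges quoted in this article, while the drive count and per-drive capacity are assumed values, not drawn from any deployment cited here.

```python
# Back-of-the-envelope arithmetic for two claims in the article.
# All figures are illustrative; drive count and capacity are assumptions.

def serial_iops(latency_seconds):
    """Upper bound on I/O operations per second when one request is
    served at a time (queueing and parallelism ignored)."""
    return 1.0 / latency_seconds

# 1. The latency gap: ~15 ms per random read for a hard drive (Goelz's
#    range) vs. an assumed ~100 microseconds for a flash read.
hdd_iops = serial_iops(0.015)    # roughly 67 serial operations per second
ssd_iops = serial_iops(0.0001)   # roughly 10,000 serial operations per second
print(f"Serial I/O ceiling: HDD ~{hdd_iops:.0f} IOPS, SSD ~{ssd_iops:.0f} IOPS")
# Enterprise SSDs reach the 100,000-IOPS class cited above via internal
# parallelism (many flash channels serviced concurrently), not one
# serial stream.

# 2. The short-stroking penalty: using 12% of each drive (within the
#    10%-15% range quoted), across an assumed 100-drive array of
#    450 GB drives.
drives, drive_size_gb, utilization = 100, 450, 0.12
raw_tb = drives * drive_size_gb / 1000.0
usable_tb = raw_tb * utilization
print(f"Raw: {raw_tb:.1f} TB, usable after short-stroking: {usable_tb:.1f} TB")
# The remaining ~88% of capacity is given up for seek performance,
# capacity an SSD-based tier would not need to sacrifice.
```

The point of the sketch is proportion, not precision: the two-orders-of-magnitude latency gap, and the high single-digit fraction of raw capacity a short-stroked array actually delivers.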