So much in store
- By John Moore
- Aug 16, 2004
If you haven't evaluated storage technology in a few years, you're in for some surprises.
Not only is storage no longer chained to servers — as it had been for so many years — it now comes in tiers, each with its own price/performance story. Indeed, the bulk of the developments in storage today converge on tiered storage. Emerging technologies promise to assign data to tiers, manage them and even create new ones.
Within tiers, rivals attempt to harness innovation as they challenge one another for supremacy. That's clearly the case in archival storage, where disk technology, armed with new arrays, and tape technology, which is equipped with nanotechnology, slug it out.
The task for storage technologists is to cultivate those technologies that are ripe for delivering value while tracking those developments that are likely to yield benefits down the road. The following are some of the more important near-term and longer-term developments in storage media, hardware, software and networking.
Dense and denser
During the past 18 months, developments in storage media have been largely focused on Advanced Technology Attachment (ATA) disks, traditionally used in desktop computers. This high-capacity, low-cost technology has quickly burst onto the enterprise storage scene, creating a storage tier between high-end disk and archival tape.
The ATA trend will continue unabated, buoyed by ever-increasing storage density, industry executives say. ATA technology "is heading for some very large sizes," said David Black, senior technologist at EMC Corp. "I don't see any sign of that trend stopping."
Black said 500G ATA drives are in development. Some market watchers expect the first of those devices to be available later this year.
Disks of such size are changing customers' storage philosophies. Black said fixed conceptions about what does, or doesn't, belong on disk are "starting to fall by the wayside." He added that cost-effective disk technology is pushing more data to disk, as opposed to tape.
"The increasing presence of ATA-based storage solutions in enterprise settings previously dominated by tape media" ranks among the most significant storage developments, according to Charles King, a research director at Sageza Group Inc.
"From what I can see, tape will never disappear, but it will increasingly be focused on niche applications," King said.
But tape isn't disappearing yet. Tape vendors dispute the notion that the rise of inexpensive disk has rendered their technology obsolete.
"That's certainly not true," said Rich Gadomski, director of marketing for product management in Fuji Photo Film U.S.A. Inc.'s Recording Media Division.
Nanotechnology, the science of manipulating materials at the molecular level, is bolstering tape's prospects. Fujifilm's Nanocubic technology has enabled the company's tape products to achieve greater storage density through the manipulation of magnetic particles on a nanometer scale, Gadomski said.
The company's first Nanocubic product, a tape cartridge for IBM Corp.'s 3592 storage device, has 300G of capacity, compared with 60G for previous-generation products. The transfer rate has grown as well, to 40 megabytes/sec. This month, Fujifilm will roll out a 300G tape cartridge under its label. The cost for such storage is about 50 cents per gigabyte, compared with $2-plus per gigabyte for Serial ATA.
Gadomski said tape will maintain its cost advantage into the future. "It will be tough for disk to keep up with tape," he said.
Other media developments are a bit further down the road, as is the case for holographic storage. Holographic storage encodes data onto a laser beam. The beam is split, and where the two beams converge, a hologram is recorded on a light-sensitive medium, according to InPhase Technologies, a company commercializing holographic storage. InPhase officials say their approach boosts storage density by recording data through the full depth of the medium, as opposed to just on the surface as current storage devices do.
Company officials already market holographic media and plan to ship a prototype holographic drive to development partners in October. "It will be the first true holographic drive that ever existed," said Liz Murphy, vice president of marketing at InPhase. Potential applications, she said, include space imagery, surveillance and archiving. Officials plan to have alpha and beta units available in 2005, and general availability is scheduled for 2006.
Additionally, Japan-based Optware Corp. reportedly will offer a holographic recording/playback device next year.
ATA and the NAS gateway
ATA products also are spurring new directions in storage subsystems.
One such development is densely packed arrays of ATA drives that are also designed to unseat archival tape. Referred to as Massive Array of Idle Disks (MAID), this technology has the ability to turn disks on and off as necessary. This feature conserves power and reduces heat, allowing for greater storage density.
Exavio Inc. last year debuted the ExaVault storage subsystem, which uses the MAID approach. Copan Systems, meanwhile, tapped Hitachi Computer Products (America) Inc. in July to manufacture its MAID product, the Revolution 200T, which is slated for general availability soon.
MAID vendors promise high capacity and fast access at prices approaching low-cost tape. Copan Systems' entry, for example, stores 224 terabytes in a single cabinet and has an access speed 10 times faster than tape, according to company officials.
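The MAID idea boils down to a simple power-management policy: keep most drives spun down, spin one up on demand, and spin it down again after an idle timeout. The sketch below is a hypothetical illustration of that policy only; the class name, timings and API are invented, not any vendor's implementation.

```python
# Hypothetical sketch of a MAID power-management policy: drives stay
# spun down until a read arrives, and are spun down again after an
# idle timeout. All names and timings are illustrative.

class MaidArray:
    def __init__(self, num_drives, idle_timeout=60.0):
        self.idle_timeout = idle_timeout        # seconds before spin-down
        self.spinning = [False] * num_drives    # all drives start idle
        self.last_access = [0.0] * num_drives

    def read(self, drive, now):
        """Access a drive, spinning it up on demand."""
        if not self.spinning[drive]:
            self.spinning[drive] = True         # spin-up latency paid here
        self.last_access[drive] = now
        return f"data-from-drive-{drive}"

    def tick(self, now):
        """Periodic housekeeping: spin down drives idle too long."""
        for d, on in enumerate(self.spinning):
            if on and now - self.last_access[d] > self.idle_timeout:
                self.spinning[d] = False

    def active_count(self):
        return sum(self.spinning)

array = MaidArray(num_drives=8, idle_timeout=60.0)
array.read(2, now=0.0)
array.read(5, now=10.0)
array.tick(now=30.0)          # neither drive has been idle 60 s yet
print(array.active_count())   # 2 drives spinning
array.tick(now=100.0)         # drive 2 idle 100 s, drive 5 idle 90 s
print(array.active_count())   # 0: both spun back down
```

Butler's concern below about repeated power cycling is precisely the trade-off such a policy must tune: a short idle timeout saves the most power but cycles the drives hardest.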
Brendan Reilly, chief technology officer and vice president of consulting services at storage integrator Sanz Inc., said he agrees with the direction MAID technology is taking, but he questions whether the market is ready. "The market won't shift for at least two years in that direction," he said.
Michelle Butler, technical program manager for the Storage Enabling Technologies Group at the National Center for Supercomputing Applications (NCSA), has looked into disk-based storage as a tape replacement.
The disk arrays would provide greater performance — a widely sought attribute among NCSA staffers. "Our users don't want data on tape," Butler said. "They don't want to sit and wait for it."
On the other hand, Butler said she wonders about the detrimental effects of repeatedly powering a disk drive on and off. She also would like to see more capacity in MAID products.
"It's getting there, but [the products still offer] not nearly enough storage," she said, adding that NCSA will have "petabytes of spinning disk on the floor" by the end of this month.
MAID devices, however, aren't the only challenge to tape. A number of storage vendors offer products that use ATA disks and emulate a tape library. Such virtual tape libraries include Advanced Digital Information Corp.'s Pathlight VX, which presents disk as logical tape drives, according to company officials. Quantum Corp.'s DX 100 plays a similar role. Both products scale beyond 40 terabytes.
Another hardware development worth noting "is the growing interest in [network-attached storage] gateways and similar devices," Sageza's King said.
NAS gateways have been around for a few years, providing a method for melding file-oriented NAS and storage-area networks (SANs), which deal with data at the lower-block level. Among the recent developments in this field is the SAN filer. ONStor Inc.'s SAN Filer SF4400, which started shipping late last year, delivers file data from a variety of SAN-based storage devices. Supported storage includes EMC, Hewlett-Packard Co., Hitachi Data Systems and IBM.
In a report on NAS gateways, analysts at the Taneja Group referred to the SF4400 as the "first true third-party offering designed to deliver file services on open-storage SANs."
Dan Smith, enterprise technology consultant for GTSI Corp.'s storage technology team, said ONStor's approach differs from vendors whose NAS gateways only interface with their own storage systems. He said the key benefit of the product is liberating disk storage previously held captive for use in a single NAS system. GTSI partners with ONStor.
Taneja Group analysts, meanwhile, anticipate greater competition in the nascent SAN filer space, according to the report. They point to Network Appliance (NetApp) Inc. as boosting open storage support for its gFiler NAS gateway. The product supports storage devices from Hitachi Data Systems and, most recently, IBM.
Ravi Parthasarathy, senior futures strategist at NetApp, said the company will continue to expand its support matrix. "It's not a technical challenge to support the other storage vendors," he said. "It's more of a business/support challenge." He said a solid support relationship must be established between NetApp and a given storage vendor to ensure "good service for our customers."
NetApp's February acquisition of Spinnaker Networks Inc. — and its NAS gateway and global distributed file system technology — may lead to additional developments in this area.
Taming tiered storage
Much of the innovation in software is about moving a multitiered storage environment from concept to reality.
Vendors seek to create architectures in which data is assigned to the storage device with the most appropriate price/performance. In such an environment, data migrates — according to its value — from costly production-class storage to nearline storage to least expensive archival storage. The objective is to get the most out of storage resources.
A number of software initiatives aim to achieve this vision of information life cycle management (ILM). At the front end of ILM, data needs to be classified to be channeled to the appropriate tier. Classification tends to be manual, but storage executives believe this situation must change.
"If an administrator has to manually classify every piece of data, the ILM game is over before it's started," EMC's Black said. "There's a great deal of opportunity in terms of being able to automatically classify data. That's a big one that is coming."
GTSI's Smith said that developing an effective way to help classify data is among the requisites for storage initiatives such as ILM.
Further along the development curve is automated data migration software. This category includes "applications that can enable users to do a better job of managing data through its life cycle," said Peter Gerr, analyst for storage systems, emerging technologies and market trends at the Enterprise Strategy Group.
Gerr said companies such as Arkivio Inc. and KOM Networks Inc. have products that tie together tiered storage environments. Arkivio, for example, lets administrators set policies for migrating data among storage tiers. "Essentially, the key here is to only retain data on the most expensive systems as long as it's needed for production or other purposes, and then begin to migrate this data through the other tiers," he said.
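Policy-driven migration of the kind Gerr describes amounts to matching each file's age against per-tier retention rules and moving anything that has outlived its tier. The sketch below is a generic illustration of that idea; the tier names, thresholds and catalog format are invented, not Arkivio's or KOM's actual policy model.

```python
# Illustrative sketch of policy-based tiered-storage migration:
# a file older than a tier's retention window moves to the next,
# cheaper tier. Tier names and thresholds are hypothetical.

TIERS = [
    ("production", 30),    # keep on high-end disk for 30 days
    ("nearline",   180),   # then on ATA disk until day 210
    ("archive",    None),  # then archive indefinitely
]

def assign_tier(age_days):
    """Return the tier a file of the given age belongs on."""
    cutoff = 0
    for name, retention in TIERS:
        if retention is None or age_days < cutoff + retention:
            return name
        cutoff += retention
    return TIERS[-1][0]

def migration_plan(catalog):
    """catalog: {path: (current_tier, age_days)} -> list of moves."""
    moves = []
    for path, (tier, age) in catalog.items():
        target = assign_tier(age)
        if target != tier:
            moves.append((path, tier, target))
    return moves

plan = migration_plan({
    "/data/orders.db": ("production", 12),   # stays put
    "/data/q1-report": ("production", 95),   # demote to nearline
    "/data/2001-logs": ("nearline", 400),    # demote to archive
})
print(plan)
```

A real product layers scheduling, access-frequency metrics and the actual data movement on top, but the core decision is this table lookup.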
Other developments aim to create a new class of storage within the tiered architecture. Today, Fibre Channel storage is typically used to support high-end applications, and ATA is used for nearline or backup applications, NetApp's Parthasarathy said. Fibre Channel's characteristics are high performance and high availability, while ATA is lower on the performance and availability scale but costs less.
Most customers want something in between. "They say, 'I don't necessarily want high performance, but I want the high availability of Fibre Channel,'" Parthasarathy said. "That quadrant is not served so far today in the marketplace."
Creating this type of storage isn't simple. ATA drives fail more often than their Fibre Channel counterparts, he said, adding that ATA drives used in PCs will occasionally go off-line. NetApp officials, however, are working on software intended to "mask the irregularities associated with ATA storage."
Already, the company has released dual-parity technology that protects customers from double-disk failures in a Redundant Array of Independent Disks (RAID) group. Company officials are also discussing the concept of software capable of running diagnostic checks to determine whether a misbehaving drive has failed or encountered minor issues that can be fixed with software.
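Dual parity survives double-disk failures because each stripe carries two independent parity equations: a plain XOR parity P and a second parity Q that weights each data block by a distinct power of a generator in GF(2^8), leaving a solvable two-unknown system when two disks die. The following is a from-scratch illustration of that standard P/Q construction, not NetApp's RAID-DP code.

```python
# Illustrative dual-parity (RAID 6-style) recovery over GF(2^8).
# P is plain XOR parity; Q weights data block i by g^i (g = 2),
# giving a second, independent equation per byte position.
# A from-scratch sketch, not any vendor's implementation.

POLY = 0x11D  # irreducible polynomial defining GF(2^8)

def gf_mul(a, b):
    """Carry-less multiply modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= POLY
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^254 = a^-1 in GF(2^8)

def parity(data):
    """Per-byte P (XOR) and Q (weighted XOR) parity for a stripe."""
    p = [0] * len(data[0])
    q = [0] * len(data[0])
    for i, block in enumerate(data):
        g_i = gf_pow(2, i)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return p, q

def recover_two(data, p, q, x, y):
    """Rebuild missing blocks x and y (given as None in data)."""
    n = len(p)
    dx, dy = [0] * n, [0] * n
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    denom_inv = gf_inv(gx ^ gy)
    for j in range(n):
        pp, qq = p[j], q[j]     # fold surviving blocks out of P and Q,
        for i, block in enumerate(data):  # leaving d_x ^ d_y and
            if block is not None:         # g^x*d_x ^ g^y*d_y
                pp ^= block[j]
                qq ^= gf_mul(gf_pow(2, i), block[j])
        dy[j] = gf_mul(qq ^ gf_mul(gx, pp), denom_inv)
        dx[j] = pp ^ dy[j]
    return dx, dy

stripe = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
p, q = parity(stripe)
damaged = [stripe[0], None, stripe[2], None]   # lose disks 1 and 3
d1, d3 = recover_two(damaged, p, q, 1, 3)
print(d1, d3)  # [4, 5, 6] [10, 11, 12]
```

Single-parity RAID offers only the P equation, which is why a second failure during a long rebuild of a large ATA drive is the scenario dual parity is meant to survive.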
Parthasarathy said the goal is "making ATA drives enterprise-class so they can be used for primary applications, as opposed to nearline or backup."
An overarching trend in storage software is the transition from "device-specific to heterogeneous management," said Jeff Barnett, manager of market strategy for IBM's TotalStorage Open Software Family. But the storage sector has considerable catching up to do when it comes to minding mixed environments.
"The management of storage is probably five years, maybe even 10 years, behind where network and server management is today," Barnett said. "In the storage world, there is very little interoperability from a management standpoint."
But officials at IBM and a host of other storage vendors are backing the Storage Management Initiative Specification (SMI-S), formerly known as Bluefin. SMI-S aims to become the common application program interface for storage hardware and management products.
The first version of the specification was published last year, and the initial wave of compliant products passed conformance testing in April. The Storage Networking Industry Association guided SMI-S development. All products available from member companies after 2005 are expected to adopt the SMI-S application program interface.
"We are moving totally away from having proprietary management tools that only work through unique interfaces," Barnett said.
Evolution, not revolution
When it comes to storage, networking may not be the focal point of innovation.
"It used to be that the networking industry took the lead in defining which way things would go," said Vijay Samalam, executive director of the San Diego Supercomputer Center (SDSC). Networking firms had the cash and the customers ready for the next wave of technology.
But things have changed since the telecommunications bubble burst. "The network industry