Fresh faces of storage
- By Terry Sweeney
- Aug 15, 2005
From the earliest days of enterprise computing until recently, what passed for innovation in the storage market was a predictable, unexciting march that moved in lockstep with a few big vendors' product development road maps.
But things have changed. Now storage is one of the most dynamic sectors of the information technology market, attracting bright new companies to an already crowded pool of capable vendors. More than merely disk or tape makers, today's new storage players are just as likely to be specialists in software, networks or security, with truly innovative approaches.
Of course, the catalyst for all this feverish activity is storage's new high-profile stature, not to mention the money that can be made serving those new demands. In the government, storage lies at the center of any number of hot-button issues, including e-government, business continuity and regulatory accountability. As agencies expect more from their storage systems, government storage managers expect more from their industry suppliers.
With that in mind, we set out to look for storage vendors whose innovative approaches are shaking up the industry. We found five vendors (AppIQ, Revivio, Sanrad, Tacit Networks and VMware) that could change the way government IT managers think about, implement, manage and use storage resources. They are already forcing the dominant storage vendors to take note.
Collectively, the companies also change the nature of storage, which for so long has been considered little more than an afterthought for busy IT shops. The new technologies blur the distinction between storage as a network, application or business process, and underscore its importance in government networks.
AppIQ manages the managers
Multivendor interoperability has been a Holy Grail of sorts in networking for nearly two decades. Government users don't always want to build single-vendor networks, and as departments and agencies consolidate, heterogeneous networks are a fact of life.
In the storage world, similar dynamics play out in the form of storage resource management. Its goal is to allow administrators to centrally manage and apply data-handling policies to storage equipment from multiple vendors.
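The mechanics behind that goal can be sketched in a few lines of Python: a management layer codes against one common model, and thin adapters translate it to each vendor's own interface. The class and method names below are purely illustrative and do not correspond to any real vendor's API.

```python
from abc import ABC, abstractmethod

class StorageArrayAdapter(ABC):
    """Common model the management console codes against."""

    @abstractmethod
    def capacity_gb(self) -> int: ...

    @abstractmethod
    def provision_volume(self, size_gb: int) -> str: ...

class VendorAAdapter(StorageArrayAdapter):
    """Wraps a hypothetical vendor-specific API behind the common model."""

    def __init__(self):
        self._volumes = []

    def capacity_gb(self):
        return 500  # stand-in for a vendor-specific capacity query

    def provision_volume(self, size_gb):
        vol_id = f"vendorA-vol-{len(self._volumes)}"
        self._volumes.append((vol_id, size_gb))
        return vol_id

class ManagementConsole:
    """One console applies the same operations to every registered array."""

    def __init__(self, arrays):
        self.arrays = arrays

    def total_capacity_gb(self):
        return sum(a.capacity_gb() for a in self.arrays)

console = ManagementConsole([VendorAAdapter(), VendorAAdapter()])
print(console.total_capacity_gb())  # 1000
```

Standards such as SMI-S, discussed below, essentially formalize the common model so that each vendor ships the adapter itself.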
Vendors pay lip service to multivendor interoperability but typically drag their heels on making it happen out of fear of helping their competitors at the expense of their own revenue. Customers had to put up with this vendor gamesmanship for some time, at least until the Storage Management Initiative Specification (SMI-S) was established. SMI-S enables multivendor interoperability of hardware and software in storage-area networks (SANs).
Many storage vendors have now won SMI-S certification for their gear, but AppIQ got a head start by making an early commitment to the Common Information Model, said John Kelly, AppIQ's director of product marketing. That model later became a cornerstone of SMI-S.
AppIQ's early bet has paid off. The company has arrangements for Hewlett-Packard, Hitachi Data Systems, Engenio Information Technologies, Silicon Graphics and Sun Microsystems to package AppIQ's StorageAuthority Suite with their storage products.
Kelly said that a major selling point of the AppIQ software is that its features were built to work together from the beginning. In contrast, the company's larger competitors have acquired companies that offer various storage management features, but they have to be integrated into one product.
"Customers want all that already integrated," Kelly said. "Under a single integrated and standards-based platform, we can deliver file-level reporting, provisioning, SAN management, network-attached storage management, backup reporting and application management. Customers don't have to install and maintain multiple agents and multiple repositories." That cuts down on operating costs, he said.
Customers have been moving away from single-vendor storage networks for some time now and want more consolidated management, said Dianne McAdam, a senior analyst and partner at Data Mobility Group.
"Everyone's disk array has its own management software that the vendor supplies, and you really want one management package that can manage across all of them," she said. "AppIQ does that very well and has emerged as the umbrella storage management that gives you one pane of glass to manage everybody's storage."
Revivio gets protective
It's not enough to do simple backup these days. Many organizations now want to be able to quickly restore a server, desktop PC or document to how it looked seconds before a system or network crashed.
That is the promise of continuous data protection (CDP). Most vendors define CDP as a utility that permits quick recovery to any specific point in time, a feat possible because the CDP system records changes in data continually, as they occur, and stores them on disk, not tape, so they are quicker to retrieve.
Many vendors, including Microsoft, Computer Associates International and FalconStor Software, are promising to deliver CDP functions in their backup software, though some are focusing on providing the capability only for a single type of application, such as an e-mail system.
Revivio has distinguished itself by performing CDP at the block level, the most basic level of storage and independent of any specific application requirements. Revivio users can roll back the clock to a few seconds before a failure and extract data that allows them to restart various applications, recover lost data and even identify individual transactions that may have been interrupted in progress, said Kirby Wadsworth, Revivio's senior vice president of marketing and business development.
CDP is an alternative to traditional backup models that involve making limited or full copies of data at preset intervals, usually not more than every few hours, because most organizations don't have the resources to perform them more frequently.
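The core idea of block-level CDP can be illustrated with a toy journal: every block write is timestamped and appended, so the volume can be reconstructed as of any instant. This is a simplified sketch of the general technique, not Revivio's implementation.

```python
class BlockCDPJournal:
    """Toy continuous-data-protection journal: each block write is
    timestamped and appended in time order, so any point-in-time
    image of the volume can be rebuilt by replaying the journal."""

    def __init__(self):
        self.journal = []  # (timestamp, block_no, data), time-ordered

    def write(self, timestamp, block_no, data):
        self.journal.append((timestamp, block_no, data))

    def restore_as_of(self, timestamp):
        """Replay all writes at or before `timestamp` to rebuild the volume."""
        volume = {}
        for ts, block_no, data in self.journal:
            if ts > timestamp:
                break  # journal is time-ordered, so we can stop here
            volume[block_no] = data
        return volume

j = BlockCDPJournal()
j.write(100, 0, b"good data")
j.write(200, 0, b"corrupted!")   # e.g. the moment of failure
print(j.restore_as_of(150))      # rolls back to just before the crash
```

Because the journal lives on disk rather than tape, any restore point is available in seconds, which is the recovery behavior Wadsworth describes below.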
Revivio sells its technology in the form of an appliance that includes the prepackaged hardware and software. It costs $250,000.
"The big deal about CDP is really pretty simple," Wadsworth said. "It provides much more granular protection than traditional backup, is much easier to do, costs less and uses less media. And you can recover in about five seconds."
Revivio has taken what is a near-instantaneous data-copying function usually reserved for expensive, high-end storage setups and moved it to less expensive Serial Advanced Technology Attachment disk arrays, said Jon Toigo, chief executive officer of Toigo Partners International.
"It's a smart move, and I'm surprised that there isn't more uptake in the market," Toigo said, though he acknowledged that some users are reluctant to bring in third-party products for such a critical storage function.
Sanrad taps into IP storage
In the 10 years or so that they have been around, SANs have relied on Fibre Channel to provide basic server-to-storage connectivity and networking. Fibre Channel is robust and well-suited to data centers, but it can be extremely complex to manage and prohibitively expensive for small and even midsize organizations.
For years Ethernet vendors have tried to elbow their way into the storage networking market using Gigabit Ethernet switches, with only limited success. But Internet SCSI (iSCSI), a storage standard that wraps standard SCSI commands inside IP and uses tried-and-true Ethernet interfaces, has caught on with those who want networked storage that costs less than Fibre Channel.
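The encapsulation at the heart of iSCSI can be sketched briefly: a standard SCSI command descriptor block (CDB) is carried as the payload of a TCP/IP segment. The header below is deliberately simplified for illustration; real iSCSI uses a 48-byte basic header segment defined by the standard.

```python
import struct

def scsi_read6_cdb(lba, length):
    """Build a 6-byte SCSI READ(6) command descriptor block:
    opcode 0x08, logical block address, transfer length, control byte."""
    return struct.pack(">BBHBB", 0x08, (lba >> 16) & 0x1F,
                       lba & 0xFFFF, length, 0)

def encapsulate(cdb, task_tag):
    """Toy iSCSI-like framing: prefix the unchanged SCSI CDB with a
    small header and hand the result to TCP. (Simplified; not the
    actual iSCSI PDU wire format.)"""
    header = struct.pack(">BI", 0x01, task_tag)  # opcode + task tag
    return header + cdb

pdu = encapsulate(scsi_read6_cdb(lba=1024, length=8), task_tag=7)
print(len(pdu))  # 5-byte toy header + 6-byte CDB = 11 bytes
```

The point is that the SCSI command itself is untouched; only the transport changes, which is why iSCSI can ride ordinary Ethernet gear.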
What distinguishes Sanrad from other iSCSI vendors is that it's the only one offering an actual network switch. Most companies offer iSCSI-enabled disk systems, said Marc Staimer, president of Dragon Slayer Consulting.
Sanrad's "switch delivers the intelligence for storage services, like volume virtualization, replication and migration, based on an IP storage network," Staimer said.
It's uncertain whether Sanrad's switch-focused approach will become the dominant model for how IP SANs are built, but Staimer said customers will be attracted by the product's low cost, which starts at $12,500. The company already counts among its customers many state universities and Cranberry Township, Pa.
Fibre Channel SANs are overkill for most government users, said Zophar Sante, Sanrad's vice president of market development.
"The bulk of what we see are requirements for departmental or workgroup servers with between 50 and 100 [input/output operations/sec], writing at 1 to 3 megabytes/sec and costing below $20,000," he said. Such servers aren't good candidates for Fibre Channel, he added.
"What's disruptive about iSCSI is that it opens up an opportunity for every server and workstation to become part of a SAN architecture," Sante said, something that the economics of Fibre Channel wouldn't easily permit. Organizations can deploy an IP SAN for less than $5,000, and Sante said iSCSI runs at about 80 percent of Fibre Channel's speed.
By putting more intelligence into the network with its iSCSI switch, Sante added, Sanrad allows customers to use any type of storage subsystem: Fibre Channel, SCSI or iSCSI. "Plug it into our switch and use all that storage as a giant pool for volume creation, snapshotting and storage creation by switch and network layer," he said.
Tacit goes wide
File sharing is one of the basic functions of any network. And the two most commonly used protocols, the Common Internet File System and the Network File System, show no signs of slowing or becoming obsolete. However, government networks are getting more geographically distributed, meaning they are more reliant on wide-area networks (WANs) to connect far-flung offices and users. Unfortunately, performance suffers when either protocol is pushed across a WAN.
Vendors' efforts to resolve this shortcoming are grouped under the heading of wide-area file services (WAFS). One of the earliest players in this market segment is Tacit Networks.
WAFS makes files retrieved from remote storage look like they're coming from a local drive, thereby overcoming WAN-specific problems such as latency, bandwidth and network reliability, said Noah Breslow, vice president of marketing at Tacit. Various WAFS techniques call on optimized protocols, compression and even file differencing, which compares a local copy to a remote one and only sends the changed pieces across the network.
"Our technology compares a cached copy at the remote site with what's at the data center and only sends the changes to make better use of WAN bandwidth," Breslow said.
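The file-differencing technique Breslow describes can be sketched in a few lines: both copies are split into chunks, the chunks are hashed, and only the chunks whose hashes differ cross the WAN. This is a generic, rsync-style illustration, not Tacit's actual algorithm; real systems use kilobyte-scale chunks rather than the tiny ones shown here.

```python
import hashlib

CHUNK = 4  # tiny chunk size so the example is visible

def chunk_hashes(data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def diff(cached, current):
    """Return only the chunks that differ between the branch-office
    cached copy and the data-center copy -- the pieces that must
    actually cross the WAN."""
    old_hashes, _ = chunk_hashes(cached)
    new_hashes, new_chunks = chunk_hashes(current)
    changed = {}
    for i, h in enumerate(new_hashes):
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed[i] = new_chunks[i]
    return changed

cached = b"AAAABBBBCCCC"
current = b"AAAAXXXXCCCC"     # only the middle chunk was edited
print(diff(cached, current))  # {1: b'XXXX'}
```

For a large file with a one-line edit, only that file's changed chunks travel over the link, which is where the WAN bandwidth savings come from.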
The traditional approach to providing file services to branch offices is deploying file servers at every location, but that gets expensive and difficult to manage. As WAFS vendors are quick to point out, such an infrastructure must be provisioned, managed and backed up, which often requires a full-time staffer at each branch office.
Tacit delivers two WAFS appliances, one that resides at the data center and the other at the remote location, which together perform the cache-comparing functions. The data center appliance costs $15,000, and the remote appliance costs $7,500.
But such nodes typically replace an infrastructure that costs $15,000 to $20,000 a year to operate, Breslow said. Moreover, some organizations spend as much as $60,000 just for tape media at all their remote sites.
Tacit's concept has won the approval of many customers, including at least one government agency. But perhaps more important to the company's prospects is storage networking vendor Brocade Communications Systems' recent decision to license Tacit's technology. Tacit has also struck interoperability agreements with EMC and Network Appliance.
Tacit's WAFS technology helps resolve the growing tension between centralized and distributed storage, Toigo said. "You get economies of scale with storage in one place, but it irritates remote users because they have to wait for downloads from the central repository," he said.
On the other hand, distributed storage can be expensive, and it opens organizations to data synchronization problems among their branch offices, Toigo said.
WAFS offers a viable alternative, but he cautioned that Tacit's approach is proprietary. A grid computing initiative spearheaded by IBM and others could overtake it; that effort aims to use remote direct memory access over IP to handle file sharing across WANs. But for now, Tacit has a good head start.
The virtues of VMware
No discussion of innovative storage technologies would be complete without mentioning virtualization, the idea of taking multiple storage systems and creating one large pool of capacity that can be divided and allocated to users and applications as needed.
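The pooling idea can be made concrete with a toy allocator: several physical arrays are presented as one pool, and logical volumes are carved out wherever free space exists, hiding physical placement from the consumer. This is an illustrative sketch of the concept, not any vendor's implementation.

```python
class VirtualPool:
    """Toy storage virtualization: multiple physical arrays appear as
    one pool of capacity; volume placement is hidden from callers."""

    def __init__(self, arrays):
        # arrays: dict mapping array name -> free capacity in GB
        self.free = dict(arrays)
        self.volumes = {}

    def total_free_gb(self):
        return sum(self.free.values())

    def allocate(self, vol_name, size_gb):
        """First-fit placement; callers never learn which array was used."""
        for array, free in self.free.items():
            if free >= size_gb:
                self.free[array] -= size_gb
                self.volumes[vol_name] = (array, size_gb)
                return True
        return False  # no single array has enough contiguous free space

pool = VirtualPool({"array1": 100, "array2": 200})
pool.allocate("db-vol", 150)   # lands on array2 transparently
print(pool.total_free_gb())    # 150
```

The administrator reasons only about the pool's total capacity; which spindles actually hold "db-vol" is the virtualization layer's business.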
Although industry analysts are bullish on the technology, customer adoption has been slow.
Many expect the virtualization now taking place at the server level to boost adoption of the storage technology. Server virtualization enables customers to run multiple applications and operating systems on the same platform.
VMware, now owned by EMC, is leading the charge with its virtual machine technology and VMotion product, which many leading storage vendors have certified to work with their products.
Server virtualization affects storage infrastructure, said Karthik Rau, VMware's director of product management. "Server virtualization lets you do more in terms of high availability and disaster recovery," he said, because customers can move stored data or storage applications to multiple virtual machines.
"As storage, server, network and data center virtualization matures, you won't need to think about where your data resides or what applications or resources it uses," Rau said. "You'll define a global policy that handles all that underneath," because data and the physical infrastructure are separated.
Server virtualization already enables storage managers to move a live operating system, its applications and associated system settings from one virtual machine to another without disrupting users, a neat trick if administrators need to do some maintenance or load balancing.
"One of the things that VMware can do and no one else can with the same quality and precision is take a snapshot of an application and its data and put it in suspended animation," said Jonathan Eunice, principal analyst at Illuminata.
Snapshots can be taken every few minutes or every few hours and then used as benchmarks for application performance. That approach has implications for disaster tolerance and recovery, he added, as well as long-term application auditing.
"This permits a sort of data archeology if you wish, which in this day of compliance requirements and disaster recovery are no longer rarified, specialist things," Eunice said. "A lot of people want to have it, and it's a very impressive capability."
Even SANs can be a simple form of storage virtualization, Rau said. "But as storage virtualization evolves and becomes more powerful, I don't have to assume SAN data is on a single, shared array," he added. And that will enable customers to automate more functions and assign priorities to servers and data, giving them far more control over stored data than they have now.
Whether at the server or storage array level, virtualization could also shake up traditional pricing models. Per-seat or per-processor licensing fees may no longer be appropriate when resources are virtual.
Rau said virtualization puts pressure on vendors to move to utility-based pricing. Customers pay only for what they use, an amount that could fluctuate each month or quarter. So far, customer feedback has been mixed, Rau said. Some like utility-based pricing, while others think it's too complex, he said.
Sweeney is a Los Angeles-based freelance writer who has covered information technology and networking for more than 20 years. He can be reached at [email protected].