Software-defined storage aims for ease of management
- By John Moore
- May 23, 2014
With networks and servers becoming software defined, it was probably only a matter of time before storage followed suit.
Sure enough, a number of vendors are now offering software-defined storage products. Data center storage stalwarts such as EMC and NetApp have staked their claims. A year ago, EMC introduced ViPR, which it describes as a software-defined storage platform.
Meanwhile, NetApp has constructed its software-defined storage approach on top of existing products. For example, the company's FAS8000 storage systems, unveiled in February, can be purchased with an optional software component that the company says paves the way for software-defined storage.
Other vendors marketing software-defined storage include Coraid, Nutanix and VMware.
Yet those moves have generated little response from federal agencies, and the technology appears to have a long way to go before it becomes a staple of government storage. A 2013 NetApp study, undertaken by Market Connections, found that a third of the government agencies and integrators surveyed were not familiar with software-defined storage. At the other end of the awareness spectrum, only 7 percent of respondents said they were very familiar with the technology.
Even large consumers of storage are not rushing to adopt software-defined products. The National Center for Supercomputing Applications, for example, operates a mass storage system at its National Petascale Computing Facility, but an NCSA spokeswoman said the center has not tried software-defined storage.
The technology can help organizations optimize their storage environments and cut costs, so even though it has not yet caught on in government, it could eventually capture the attention of budget-constrained federal IT managers.
Why it matters
Industry executives say software-defined storage can improve an agency's use of existing storage assets so that fewer new devices need to be purchased. That's a notable concern given ongoing efficiency programs such as the Federal Data Center Consolidation Initiative, now part of the Office of Management and Budget's PortfolioStat.
Agencies are judged on the number of data centers they continue to operate and their track record in closing redundant facilities. A March 2013 OMB memo states that agencies will be "measured by the extent to which their core data centers are optimized for total cost of ownership."
In addition, the Federal IT Acquisition Reform Act (FITARA), which cleared the House in February, addresses data center optimization by calling for an initiative to boost the "efficiency of federal data centers."
"Those [data centers] that don't get closed down get modernized," said Dave Gwyn, a vice president in Nutanix's federal division.
If FITARA becomes law, managers will need to consider data center footprints, power consumption and labor costs, Gwyn added. Agencies seeking to wring additional efficiencies out of their data centers will find the biggest opportunities in storage, which represents the lion's share of IT infrastructure spending.
"Storage is going to be where a lot of the answers are found," Gwyn said. "I think software-defined storage is going to have a very big play in the federal government in the coming months and years."
In addition, agencies that offer storage to others under shared services arrangements could find that software-defined storage makes it easier to provision resources.
Agencies of all sizes can take advantage of software-defined storage, said Jeff Baxter, principal architect for the U.S. public sector at NetApp, but service providers with multiple tenants will get the most immediate and obvious benefits.
Software-defined storage builds on the legacy of storage virtualization, which has been around for at least a decade. Federal customers include the military's Tricare health system, which used storage virtualization to facilitate a regional consolidation project in 2005. The idea was -- and is -- to create a single pool of logical storage from a number of devices. IT administrators can then provision that pool of storage as needed, thereby improving the use of storage arrays and making the whole affair easier to manage.
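That pooling-and-provisioning idea can be sketched in a few lines of code. The model below is purely illustrative, not any vendor's API: several physical arrays are aggregated into one logical pool, and administrators provision volumes from the pool without choosing a device themselves.

```python
# Hypothetical model of storage virtualization: many arrays, one pool.
class Array:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Presents multiple arrays as a single pool of logical capacity."""

    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        return sum(a.free_gb() for a in self.arrays)

    def provision(self, volume_gb):
        # Place the volume on the array with the most free space;
        # callers never see which physical device was used.
        target = max(self.arrays, key=lambda a: a.free_gb())
        if target.free_gb() < volume_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += volume_gb
        return target.name  # returned only for illustration


pool = StoragePool([Array("array-a", 500), Array("array-b", 1000)])
pool.provision(300)          # lands on array-b, the emptier device
print(pool.total_free_gb())  # 1,200 GB of logical capacity remains
```

The point of the abstraction is the last two lines: the administrator asks the pool for capacity and never touches an individual array.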
Baxter said NetApp has been offering storage virtualization for years -- the company's V-Series storage virtualization hardware debuted in 2003. EMC's Invista storage virtualization product dates from the same era.
So what has changed? Much of the difference between the first wave of storage virtualization and software-defined storage can be found in the storage controller. A controller serves as the brains of a storage array by managing communications between storage devices and external servers and networks. It also handles services such as replication.
Virtualized storage pools resources from multiple arrays and vendors behind a physical storage controller. Software-defined storage goes a step further by virtualizing the controller as well. Baxter said the technology makes it possible to slice each physical storage controller into multiple "storage virtual machines." In addition, multiple physical controllers can be pooled into one storage virtual machine.
With that approach, an organization seeking storage resources for a new workload can set up a storage virtual machine instead of buying a new array, Baxter said. Similarly, an agency running a multi-tenant private cloud environment can provision a storage virtual machine for each tenant rather than acquiring separate pieces of hardware.
"Storage virtual machines can be created to exactly match the service-level objectives of a given customer or workload," Baxter said. "It makes it a lot easier to manage and...provision."
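The storage-virtual-machine idea described above can be sketched as follows. This is a hypothetical illustration, not NetApp's implementation: one physical controller is sliced into several SVMs, each carrying its own tenant and service-level objective, so a new workload gets an SVM rather than a new array. All names and fields are invented for the example.

```python
# Hypothetical sketch: slicing one physical controller into SVMs.
from dataclasses import dataclass, field


@dataclass
class StorageVM:
    tenant: str
    iops_limit: int   # service-level objective for this tenant
    replication: bool  # data services are set per SVM, not per array


@dataclass
class PhysicalController:
    name: str
    svms: list = field(default_factory=list)

    def carve_svm(self, tenant, iops_limit, replication=False):
        svm = StorageVM(tenant, iops_limit, replication)
        self.svms.append(svm)
        return svm


ctrl = PhysicalController("controller-1")
ctrl.carve_svm("agency-hr", iops_limit=5000, replication=True)
ctrl.carve_svm("agency-web", iops_limit=20000)
print(len(ctrl.svms))  # two tenants share one physical controller
```

Each SVM carries its own performance ceiling and data services, which is what lets the objectives "exactly match" a given customer or workload.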
In March, Nutanix announced that it had been granted a patent for a storage architecture featuring controller virtual machines that run on a system of distributed servers. The virtual machines operate on each server and aggregate local resources to deliver a pool of storage. EMC's ViPR software-defined storage platform, meanwhile, includes ViPR Controllers, which the company said virtualize the underlying storage infrastructure.
Another difference in the current iteration of storage virtualization has to do with the nature of the physical resources. In the past, virtualization focused on purpose-built storage hardware. Nutanix, in contrast, focuses on generic hardware.
"We are talking about devices that are truly commodity hardware," Gwyn said.
Nutanix software uses virtualization to make a standard server function as a storage device. Each server acts as both a storage and computing node in Nutanix's converged infrastructure approach.
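The converged approach can be modeled in miniature. The sketch below is an assumption-laden illustration, not Nutanix's architecture: a controller process on each server contributes that node's local disks to a single cluster-wide pool, so every node serves both compute and storage.

```python
# Hypothetical sketch of converged infrastructure: each node's local
# disks are aggregated into one cluster-wide storage pool.
class Node:
    def __init__(self, name, local_disk_gb):
        self.name = name
        self.local_disk_gb = local_disk_gb  # contributed to the pool


class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def pooled_capacity_gb(self):
        # The pool is simply the sum of every node's local storage;
        # adding a server grows compute and storage at the same time.
        return sum(n.local_disk_gb for n in self.nodes)


cluster = Cluster([Node("node-1", 2000),
                   Node("node-2", 2000),
                   Node("node-3", 2000)])
print(cluster.pooled_capacity_gb())  # 6,000 GB presented as one pool
```

The contrast with the earlier model is the direction of aggregation: instead of dedicated arrays sitting behind a controller, commodity servers pool their own local resources.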
George Symons, CEO of Gridstore, said the technology also aims to separate storage from data services such as replication and compression. The goal is to define storage services, policies and performance characteristics for individual virtual machines, said Symons, whose company provides software-defined storage for virtualized environments.
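That separation of storage from data services can be expressed as per-VM policy. The snippet below is a generic sketch of the idea, not Gridstore's product; the policy fields and VM names are invented for illustration.

```python
# Hypothetical per-VM storage policies: data services and performance
# characteristics are declared per virtual machine, decoupled from
# any particular physical array.
from dataclasses import dataclass


@dataclass(frozen=True)
class StoragePolicy:
    replication_copies: int
    compression: bool
    min_iops: int


policies = {
    "db-vm":   StoragePolicy(replication_copies=2, compression=False,
                             min_iops=10000),
    "file-vm": StoragePolicy(replication_copies=1, compression=True,
                             min_iops=500),
}


def services_for(vm_name):
    """Look up the services a VM's storage must provide, independent
    of which physical device ends up backing it."""
    return policies[vm_name]


print(services_for("db-vm").replication_copies)  # 2
```

A database VM and a file-share VM get different replication, compression and performance guarantees without the administrator configuring any array directly.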
The most obvious obstacles to broader adoption of software-defined storage are the lack of knowledge about the technology and limited interest among potential federal customers.
"I believe there is a moderate to low interest in software-defined storage," said Milton Lin, solutions architect at Force 3. "It is in its 1.0, 1.1 or early-adoption phase."
Innovation fatigue might also play a role. Todd Cowles, director of the cloud infrastructure practice at Iron Bow Technologies, cited the consumption gap that occurs when IT companies launch technologies faster than organizations can adopt them. He said customers need to fully digest the technologies they already have before they look at something new.
"I have not seen a tremendous amount of interest...in the next generation of products," Cowles said.
Chris Howard, a vice president in Nutanix's federal division, said he believes software-defined storage is a compelling technology, but he acknowledged that customers might see other benefits as more important, such as the speed of deployment and the ability to combine servers and storage in a single box.
"We do talk a lot about software-defined storage, but I'm not sure that is the most compelling thing to the customers," Howard said.
Alan Marett, a contractor working as the server network team leader at the Army's Program Executive Office for Aviation, cited the ability to save rack space as among the top benefits of Nutanix's converged compute and storage appliances. PEO Aviation uses Nutanix gear to support its virtual desktop infrastructure project. Originally, officials thought they would need an entire rack of servers and storage to meet their requirements. But the Nutanix appliances can handle PEO Aviation's workload using only 4U of rack space.
"We saw some rack requirements go down," Marett added.
Over time, as more agencies pursue software-defined storage, they might face other obstacles. For instance, they must take care not to mix management activities at different levels. Storage arrays are traditionally managed individually through each array's own interface, but software-defined storage adds a higher level of management, Lin said. Synchronization issues, among other problems, could arise if an agency managed the same devices at both the array and software-defined storage levels, he added.
"There will always be some issues with any new technology, and software-defined storage will not be immune," Lin said.