The untapped opportunity for data center optimization

For most agencies, storage efficiency lies below the surface in secondary data stores.

Federal data centers are going to get a long-overdue makeover. This past summer, the White House issued the Data Center Optimization Initiative (DCOI), which calls for IT departments to consolidate data centers, reduce their energy consumption and lower their costs across the federal government. The initiative reflects a growing reality for public- and private-sector organizations alike: data centers have become bloated and costly. In fact, the federal government spent $5.4 billion on data centers in 2014 alone, a figure the White House aims to cut to $2.7 billion by 2018 under the DCOI.

The DCOI also points to a broader problem plaguing data centers and the storage administrators who manage them. Organizations' data stores are multiplying rapidly, and many have responded by buying more outdated storage boxes and more point solutions, applied as Band-Aids to siloed storage issues around individual use cases. As a result, storage admins are staring down a jungle of software UIs to manage all of these disparate data repositories. The systems are not only vast and expensive, but they also require ongoing upgrades and must be administered by highly skilled, costly storage professionals.

There are new approaches to storage consolidation that offer hope to agencies looking to meet the DCOI's guidelines, but to date they have largely been confined to a small slice of the data center. If we think of storage as an iceberg, mission-critical data sits above the water in what's known as primary storage, requiring strict SLAs and representing about 20 percent of an organization's overall storage footprint. There has been a tremendous amount of innovation in primary storage over the last 10 years, which has been a boon for federal agencies running next-generation applications and other high-priority use cases.

The other 80 percent of the data center, known as secondary storage (the bulk of the iceberg under the water), holds the non-mission-critical data -- backups, test/dev, file services and analytics. Innovation in this area has been stagnant, and these environments are replete with point solutions sold by vendors promising quick fixes for individual use cases.

Reaping the benefits of hyperconvergence

Perhaps the most significant development in data storage this decade has been hyperconvergence, which has already had a measurable impact on the primary storage market. In essence, hyperconvergence consolidates distinct silos of data center infrastructure into a single operating environment. That means merging compute, network and storage, and it also means bringing together silos of infrastructure that run different application workloads.

Federal organizations like the U.S. Army already have adopted hyperconverged infrastructures and achieved impressive results. The Army cited fast and simple procurement, less need for rack space and lower administrative burdens among the cost-cutting benefits of adopting a hyperconverged system for its virtual desktop pilot. Notably, the Army decided to apply its leftover SAN infrastructure, originally intended for this virtualization project, to a secondary storage use case (continuity).

Looking beyond primary storage

Applying these principles of hyperconvergence to the 80 percent of the storage iceberg sitting below the surface could be the key to meeting the DCOI's objectives. A single UI for managing the data sprawl of secondary storage gives administrators a complete picture of their data assets, letting them uncover efficiencies that reduce both the storage footprint and infrastructure costs. Consider that the typical enterprise-grade organization copies its data 10 to 12 times across individual storage appliances for secondary storage use cases. What if a single copy of that data could span all of those use cases?
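
The back-of-envelope math suggests why that question matters. The sketch below takes only the 10-to-12-copy figure from above; the dataset size and the overhead for a shared, deduplicated copy are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch: secondary storage footprint when each use case keeps
# its own full copy versus when a single copy is shared across use cases.
# Only the 10-12 copy range comes from the article; other numbers are assumed.

primary_tb = 100              # hypothetical primary data set, in terabytes
copies_per_use_case = 10      # low end of the 10-to-12 copies cited above

siloed_footprint = primary_tb * copies_per_use_case   # one full copy per appliance
consolidated_footprint = primary_tb * 1.2             # one copy plus ~20% assumed overhead

savings = 1 - consolidated_footprint / siloed_footprint
print(f"Siloed secondary footprint:       {siloed_footprint:.0f} TB")
print(f"Consolidated secondary footprint: {consolidated_footprint:.0f} TB")
print(f"Approximate reduction:            {savings:.0%}")
```

Even with generous overhead assumptions, collapsing per-use-case copies into one shared copy shrinks the secondary footprint by an order of magnitude in this hypothetical case.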

Besides reducing the physical, logistical and energy demands of an organization's data center, hyperconverged secondary storage systems ease the growing burden on storage admins -- and on the IT leadership that oversees them. Much of a storage administrator's time is spent provisioning physical layers of infrastructure to match the idiosyncrasies of individual workloads, not to mention performing ongoing manual upgrades. The software-defined approach of hyperconvergence, in which compute and storage resources are managed by policy rather than by logical or physical boundaries, can significantly reduce this repetitive and often mundane work. That frees storage professionals to deliver new value to their IT departments, which in turn cut down on the administrative and staffing costs of managing secondary data.
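
To make "managed by policy" concrete, here is a minimal sketch of what policy-driven provisioning might look like in code. The policy fields and the provision() helper are hypothetical illustrations, not the API of any particular product: the point is that the administrator declares intent once, and software maps workloads onto the shared pool.

```python
from dataclasses import dataclass

# Hypothetical storage policy: the admin states the desired protection,
# retention and performance tier instead of carving out a physical array.

@dataclass
class StoragePolicy:
    name: str
    replication_factor: int   # copies kept across nodes for resilience
    retention_days: int       # how long protected data is kept
    tier: str                 # e.g. "ssd" or "hdd"

def provision(workload: str, policy: StoragePolicy) -> dict:
    """Bind a workload to a policy rather than to specific hardware (illustrative)."""
    return {
        "workload": workload,
        "policy": policy.name,
        "replicas": policy.replication_factor,
        "retention_days": policy.retention_days,
        "tier": policy.tier,
    }

backup_policy = StoragePolicy("nightly-backup", replication_factor=2,
                              retention_days=90, tier="hdd")
print(provision("file-services", backup_policy))
```

In a policy-driven model like this sketch, onboarding a new workload is a declaration rather than a provisioning project, which is where the staffing and administrative savings come from.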

Federal IT departments have reason for caution as they plan their budgets, but the DCOI will continue to push for the kind of efficiency that government data centers sorely need. Many IT departments might not think to look at long-overlooked secondary data categories such as backups, test/dev, analytics and file services, but the cost savings are hiding in plain sight, ready to be hyperconverged.