New options for never-say-die IT

Many federal agencies can confidently proclaim “mission accomplished” for using virtualization to reduce the number of computer servers they operate, thereby saving on hardware, administration and power costs.

However, server consolidation should not be the first and last stop on the virtualization tour. The technology — which breaks the one-server-to-one-computer bind by enabling multiple virtual servers to run on a single physical machine — offers a similarly compelling set of benefits when applied to business continuity: the provisions that get IT operations running again after unplanned disruptions, whether natural or man-made.

“Virtualization can provide a much more cost-effective way to create redundancy,” said Lauren Whitehouse, a senior analyst at the Enterprise Strategy Group.

But there is a lot more to using virtualization in this way than meets the eye. Data management and system design challenges, financial constraints, and governmentwide policies about the future of data centers will all influence the options agencies have for using virtualization as a price-slashing business continuity tool. The opportunity is big, but IT executives are going to have to roll up their sleeves to make it happen.

Why it matters

Most IT executives use the “nines” to measure the amount of time their systems are available to end users. Five nines means that systems are running for 99.999 percent of a given time frame. That translates to about 5.26 minutes of downtime in one year. One nine, or 90 percent, means 36.5 days of downtime. Nines and IT costs have always been directly related, but virtualization can flatten the curve significantly.
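The arithmetic behind the nines is straightforward. As a minimal illustrative sketch (the function name and script are ours, not drawn from any agency's tooling):

```python
# Annual downtime implied by each availability level ("nines"),
# assuming a 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(nines: int) -> float:
    """Annual downtime in minutes for a given number of nines
    (e.g., 5 nines = 99.999 percent availability)."""
    unavailability = 10 ** (-nines)  # fraction of time the system is down
    return MINUTES_PER_YEAR * unavailability

for n in range(1, 6):
    print(f"{n} nine(s): {downtime_minutes(n):,.2f} minutes of downtime per year")
```

One nine works out to 52,560 minutes, or 36.5 days, while five nines is roughly 5.26 minutes, which is why each additional nine costs so much more than the last.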

“Sometimes the cost of trying to get the next nine is not worth the value, but that said, if I can get it by following the trends [such as server virtualization], I’m going after it,” said Capt. Shawn Hendricks, program manager at the Naval Enterprise Networks Program Office, where he oversees the Navy’s 750,000-user Next Generation Enterprise Network. That program has used virtualization to reduce the size of its server farms by 40 percent.

At the Census Bureau, uptime is gaining renewed attention as the agency seeks to move to a virtual desktop infrastructure (VDI), said Jason Schaufele, the bureau’s assistant division chief for systems.

With VDI, individualized computer resources are managed centrally on data center servers rather than on full-featured PCs sitting on users’ desks. Users can access their server-based virtual desktops from a variety of devices, including traditional PCs, laptops, tablets and even smartphones.

“This means we will have an application — VDI — that cannot withstand any downtime,” Schaufele said. “If I say all 8,000 Census employees can’t work [because servers are down], I’ll be looking for a job.”

The bureau, which has virtualized about 40 percent of its servers, is exploring ways to use server virtualization to support its continuity objectives.

The fundamentals

The beauty of virtualization for business continuity lies in the way servers are managed and decoupled from the machines they run on. For example, a Web server in the traditional world typically means a physical server running an operating system and the Web application software, plus all the associated network and storage connections.

With virtualization, that Web server becomes a virtual machine, and its workload is encapsulated into a compact set of files that can easily be moved from one physical server to another as needed. If the original physical server goes down, administrators can just fire up a copy of the virtual machine that has been stored on a different server.

Redundant server capacity — whether it’s two slots away on a server rack or in a data center on the other side of the country — can become a failover resource shared by multiple virtual machines.

“Virtualization allows you to achieve high availability for a greater number of systems for a fraction of the cost if you consider what it might take to provide ‘like’ hardware for each of your systems,” said Mike Rosier, senior systems administrator at the Energy Department’s Fermi National Accelerator Laboratory, which has virtualized about 60 percent to 70 percent of its servers that were candidates for conversion.

Related continuity benefits include reducing or eliminating single points of failure within an infrastructure and the ability to recover downed servers more quickly and with less manual effort, Whitehouse said. In addition, numerous vendors offer tools for managing and automating virtual server recovery.

Reducing labor requirements for those kinds of tasks is a big goal for the Navy given the enormous size of its network operation and tightening budgets, Hendricks said, and server virtualization helps.

Virtualization also lets IT administrators test their continuity capabilities and procedures more easily, a big bonus given that many organizations avoid such tests for fear of disrupting normal operations, Whitehouse said.

The hurdles

Many technical, financial and organizational factors will affect an agency’s options for building a more affordable, fault-tolerant IT infrastructure.

Data — or, more specifically, data that is growing like a marshmallow in a microwave — will be issue No. 1 for many agencies because server applications are useless without access to data.

“It’s easy to push around a [virtual] server because it’s a couple [gigabytes], but some of our folks have data stores that are 6 or 7 terabytes, and it takes a lot to move that from one location to another,” Schaufele said. “And it’s expensive to have that amount of data in a redundant position.”

Even with virtualization helping, every step up in continuity capability, whether shortening recovery time or narrowing the window during which data is at risk, comes with additional cost.

The organization’s leaders should decide how much protection from disruption a business operation requires and how much they are willing to pay for that. Those decisions can then guide IT plans. However, pinning down business managers about their true tolerance for downtime can be hard, so many IT officials lack that important input.

“You talk to program managers, and of course, they can’t have any downtime at all,” Schaufele said. “But as you climb the food chain and show the costs associated with these various service levels, you find out they don’t quite need what they think they need.”

The Census Bureau’s plan is to test different tiers of virtualization-based continuity capabilities and then extrapolate the cost of ensuring that downtime does not exceed one hour, for example. After that, the IT department should be able to have more fruitful discussions with business managers about their requirements, Schaufele said.

Another complicating factor for agency IT officials is the Federal Data Center Consolidation Initiative, which is pressuring agencies to close down some data centers and stop opening new ones. One of the best tools for increasing resiliency is having backup capabilities at an alternative site, preferably some distance from the risks, such as hurricanes or floods, that threaten the primary site. But by urging greater concentration of IT resources at fewer sites, the consolidation initiative could limit continuity options.

Virtualization does not make business continuity a zero-cost no-brainer, but it can drastically enhance an agency’s ability to bounce back from life’s surprises.

Next steps: Data’s safe deposit

By making servers as easy to relocate as moving a file, server virtualization can bolster business continuity capabilities and lower their cost. But one of the sticking points is the data that those vagabond servers must access.

Enterprise data storage equipment is expensive, and not all agencies can justify setting up redundant storage pools at secondary locations — if they even have secondary sites.

Other options for meeting that challenge are starting to take shape, said Joe Brown, president of Accelera Solutions, a virtualization solutions provider with numerous government clients. “We encourage agencies to look outside the box and let the economics drive the decision,” he said.

Among the options that Brown and others highlighted:

  • New storage replication solutions. Quest Software, Veeam, Zerto and others provide virtualization-friendly tools for data backup and replication that are storage-system agnostic, meaning they work with whatever hardware sits underneath, so agencies can pair them with commodity storage that costs far less than the big vendors’ proprietary arrays.
  • Shared services and private clouds from other agencies. Numerous agencies, including the Social Security Administration, are exploring ways to build cloud-based storage services that could be offered to other government offices.
  • Public cloud offerings. Relatively inexpensive data storage services from Amazon and others can be coupled with cloud-based platforms that can host an agency’s virtual machines in a pinch, thereby providing a temporary harbor when in-house IT operations go down.

