Cloud Computing

Will Sandy's winds blow more agencies to the cloud?

Hurricane Sandy, seen here overspreading the mid-Atlantic and northeastern states, might help make a case for cloud computing. (Image: NOAA).

A blustery lady named Sandy may be the motivation some agencies need to finally jump aboard the cloud.

The full effect of Sandy -- the “Frankenstorm” that began as a tropical system, grew into a hurricane, then collided with two other systems to become a different kind of devastating event -- is not yet known. But as one former federal CIO said Oct. 31, agencies that have recently moved to the cloud often have newer systems and data centers. And those newer data centers, which are designed to handle natural disasters, likely rode out the storm without significant disruption.

“If they have sort of a weaker infrastructure, I would think [Sandy] would be a motivating factor. If they have a very solid infrastructure already established, I’m not sure it would make any difference,” said Gregg “Skip” Bailey, director at Deloitte Consulting LLP and the federal cloud computing lead at the firm. He was formerly CIO at the Bureau of Alcohol, Tobacco, Firearms and Explosives.

Other experts share that sentiment. The cloud itself is not the answer to beating Sandy; the design of the new data centers is. The new centers are built to survive natural disasters with geographically dispersed, redundant systems. If one center loses power to flooding, there’s another system elsewhere, often far away, that keeps agencies connected to their information.

The General Services Administration, an early adopter of the cloud, weathered Sandy on the strength of its infrastructure, CIO Casey Coleman said Oct. 31. All of GSA’s IT systems remained fully operational during the storm. Users were able to conduct business through the Internet, email, collaboration tools and all business systems, even though a Verizon outage in Manhattan caused a minor interruption in connectivity.

“GSA’s cloud conversion prevented complications from the Verizon outage, which would have led to interruptions in these services for GSA users in New York and New Jersey,” Coleman said. “GSA also used the capabilities of the cloud email platform to perform emergency response and recovery functions.”

In addition, GSA Acting Administrator Dan Tangherlini wrote Oct. 31 that the agency’s IT and telecommunications systems are available to support disaster relief and emergency response efforts. More than 4,000 GSA employees in areas affected by Sandy have been teleworking to maintain the continuity of agency operations, he wrote on the GSA Blog.

This is what today’s data centers are designed for, experts say.

Even in a major storm, “it should not be a big deal to continue to do your job,” said Susie Adams, CTO at Microsoft Federal.

Daniel Castro, senior analyst with the Information Technology and Innovation Foundation, said that with the cloud, agencies no longer have to keep their data centers nearby, as they did until recently. Washington-based agencies once had a data center in the basement of their headquarters and often another somewhere in northern Virginia. If power in DC went out, the agency could send its IT employees to the backup center to continue operations.

However, Hurricane Sandy proved that model does not work, Castro said. The cloud allows agencies to operate data centers in different regions, even on the East Coast and the West Coast. Adams said Microsoft keeps its data centers at least 1,000 miles apart; one client has a data center in Chicago with its redundant backup center in San Antonio, Texas.

As Sandy struck much of the East Coast from DC to New York and beyond, many organizations found their data centers were not built with disasters in mind.

For instance, The Huffington Post news website went down Oct. 29 during the storm. One of the site’s data centers, in Battery Park in New York City, flooded, according to John Pavley, the site’s CTO. The site switched to its backup center, but that center, in Newark, N.J., soon had problems of its own. Pavley said three separate circuits, designed for redundancy, carry data to and from the centers. All three went dead for reasons still unknown.

To say the least, “it’s not smart to have both data centers just across the river,” Castro said.

The storm has been a case study for cloud computing and the well-thought-out designs that come with it, said Chris Wilson, vice president and counsel for communications, privacy and Internet policy at TechAmerica. Data centers are no longer built on islands or in flood plains, and rarely in basements.

A 30-year-old data center built on mediocre infrastructure likely lacks redundant power grids, redundant communications and geographic dispersal, Bailey said. In the cloud environment, those features tend to come built in.

“If you’re in a position of vulnerability and you haven’t taken some of these steps of [transition], then cloud becomes all that much more attractive” in Sandy’s wake, he said.

About the Author

Matthew Weigelt is a freelance journalist who writes about acquisition and procurement.

Reader comments

Fri, Nov 2, 2012 Steven Mohr Houston

True, redundant data centers have been around for a long time, but they were only accessible to large corporations. What the widespread availability of cloud computing does is provide the same benefits -- security, reliability, disaster recovery and business continuity -- to small and mid-sized organizations. - Steven Mohr, VP of Sales at Virtual-Q #QDesktop

Thu, Nov 1, 2012

The real question is how a particular cloud provider architects, deploys and operates its systems. Each is different. Choose a provider that offers geographic separation by default, with no scheduled downtime ever for maintenance, patches or upgrades. Compare each provider’s historical uptime statistics -- being careful to use apples-to-apples methodologies in how uptime is calculated (some providers are more transparent than others and post theirs publicly; be wary of those who do not) -- against current legacy on-premises systems in terms of total uptime, availability and mean time to recover. How do you know if your disaster recovery solution is as strong as you need it to be? It is usually measured in two ways: RPO (recovery point objective) and RTO (recovery time objective). The RPO design target should be zero, and the RTO design target is instant failover. That is extremely costly and hard to achieve with traditional onsite hardware, software and bandwidth dollars. Plus, as recent events showed, many companies found their own solutions didn’t even work under real conditions.

Thu, Nov 1, 2012

This is not new; redundant data centers have been around for a long time. What has changed is the speed of the connections between data centers, which allows them to be farther apart. That also allows 100 percent duplication of data for critical systems, and replication tools exist to keep them in sync. The cloud has nothing to do with it; it is good planning and execution. Please get off the cloud bus and remember we had this in the ’90s, but no one wanted to give up their stovepiped mentality.

