Cloud computing is not always helpful in data recovery

Post-disaster data recovery has become easier now that storage is automated

Newer technologies such as cloud computing can be a boon for post-disaster recovery of data, but they don't always help much, Dennis Heretick, former chief information security officer for the Justice Department, said at a FOSE trade show session today.

“Cloud computing can provide more reliability, but that should not be assumed,” Heretick said. Each agency or company should assess for itself how a specific cloud application fits within its disaster recovery strategy, he added.

Overall, in the last five years, disaster recovery and business continuity planning have become easier and less costly because of the availability of automated electronic storage processes for critical data, Heretick said.

Even so, there are hurdles to overcome in developing and implementing a disaster recovery plan. Chief among them are securing management support for disaster recovery goals and identifying the roles individuals must perform in executing the plan, then winning their commitment to those roles, Heretick said.

For the typical manager, “disaster recovery planning is important, but not as important as the day-to-day operations,” Heretick said. He suggested building support for continuity plans by linking them to specific high-priority missions of the company or agency. For example, a business impact assessment can show how the agency’s mission would suffer if specific types of data were lost or unavailable.

Heretick and Bill Nichols, senior systems engineer for Mitre Corp., outlined seven tiers of disaster recovery. In the lowest tiers, there is a loss of data, little or no backup, and limited recovery. In the middle tiers, there is manual or automated backup of data. In the upper tiers, there is fully automated backup of data and of applications.
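For illustration only, that tier breakdown might be summarized as a simple lookup structure, as in the Python sketch below. The specific tier numbers, backup methods and recovery capabilities shown are assumptions extrapolated from the lowest/middle/upper groupings described in the session, not a framework published by Heretick or Nichols.

    # Illustrative sketch only: the tier descriptions below are assumptions,
    # extrapolated from the lowest/middle/upper groupings described above.
    from dataclasses import dataclass

    @dataclass
    class RecoveryTier:
        tier: int        # 1 (lowest) through 7 (highest)
        backup: str      # what is backed up, and how
        recovery: str    # what can realistically be recovered

    SEVEN_TIERS = [
        RecoveryTier(1, "little or no backup", "limited recovery; data loss likely"),
        RecoveryTier(2, "ad hoc manual backup of some critical data", "partial, slow recovery"),
        RecoveryTier(3, "scheduled manual backup of critical data", "recovery of most data"),
        RecoveryTier(4, "automated backup of data", "reliable data recovery"),
        RecoveryTier(5, "automated off-site backup of data", "data recovery after site loss"),
        RecoveryTier(6, "automated backup of data and key applications", "application recovery"),
        RecoveryTier(7, "fully automated backup of data and applications", "near-continuous recovery"),
    ]

    def tiers_meeting(minimum: int) -> list[RecoveryTier]:
        """Return the tiers at or above a required minimum level."""
        return [t for t in SEVEN_TIERS if t.tier >= minimum]

Grouping the tiers this way lets an organization state its recovery requirement as a minimum tier and check which backup arrangements would satisfy it.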

One of the most important first steps in planning is accurately classifying data by its importance. The next step is drawing up a plan and identifying roles. Too often, people are assigned a role without being knowledgeable about it or committed to performing it; those are issues to be worked out through discussion and exercises, Heretick said.
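Again purely for illustration, the classification-and-roles step could be captured in a structure like the hypothetical sketch below; the classification levels, field names and example entries are assumptions, not a scheme prescribed at the session.

    # Hypothetical sketch: classification levels, field names and examples are assumptions.
    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        name: str
        classification: str   # e.g. "mission-critical", "important", "routine"
        mission_impact: str   # what the mission loses if this data is unavailable

    @dataclass
    class RecoveryRole:
        title: str
        assignee: str
        trained: bool = False     # knowledgeable about the role
        committed: bool = False   # has agreed to perform it

    def unready_roles(roles: list[RecoveryRole]) -> list[RecoveryRole]:
        """Flag roles whose assignees are not yet trained or committed --
        the gap that discussion and exercises are meant to close."""
        return [r for r in roles if not (r.trained and r.committed)]

    plan = {
        "assets": [
            DataAsset("case management records", "mission-critical",
                      "casework halts if unavailable"),
        ],
        "roles": [
            RecoveryRole("backup verification lead", "A. Analyst", trained=True),
        ],
    }

    print(unready_roles(plan["roles"]))  # the lead is trained but has not committed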

A simple strategy that can be executed effectively is more worthwhile than a complicated one that managers and IT employees are not fully behind, Heretick said.

“The pitfalls of a disaster recovery plan are too much detail, too much information, and people don’t ‘own’ their roles,” Heretick said.

About the Author

Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.

Reader comments

Thu, Mar 25, 2010 Chris Canada

Thanks Alice for this article. Heretick said it best: simple strategies that work will always outperform overly complex ones. Cloud computing definitely gives us great options for data storage and redundancy, but it also raises the risk that accidentally deleted files will be overwritten, moved or defragmented before they can be recovered, which can greatly complicate the recovery process. I would suggest cloud computing as one part of a recovery scheme for any large organization, but keeping complete copies of the data at multiple locations is always a priority, as is keeping multiple copies. Ultimately, though, the single most important factor in any data loss prevention plan is execution: have the people at the ground level verify that backup processes are in place and working, have them test their backups on a regular basis, and audit any errors or failures in a way that holds them accountable. Most data backup plans that fail do so simply due to lack of follow-through once a plan is enacted. I would be a poor man if this were not true, as I specialize in data recovery. Chris, Armor-IT Data Recovery, Saskatchewan, Canada
