Big Data

Can big data help the government save $500 billion?

A survey of 150 federal IT executives conducted by Meritalk suggests big data has the potential right now to produce a smarter, more efficient government that could cut the $3.54 trillion federal budget by 14 percent, freeing up an extra $500 billion per year.
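
As a quick sanity check on that figure: 14 percent of $3.54 trillion is roughly $496 billion, which rounds to the $500 billion cited. A minimal check, illustrative only and not part of the report:

    # Illustrative arithmetic check of the survey's headline figure.
    federal_budget = 3.54e12   # $3.54 trillion
    projected_cut = 0.14       # 14 percent, per the executives' extrapolation

    savings = federal_budget * projected_cut
    print(f"Projected annual savings: ${savings / 1e9:.0f} billion")  # ~$496 billion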

The survey findings are documented in a report called "The Smarter Uncle Sam: The Big Data Forecast." The eye-catching $500 billion figure comes from extrapolating what the 150 IT execs believe their respective agencies can save by successfully leveraging big data – a relatively new tech term meant to describe pulling insights from the analysis of large, sometimes seemingly unrelated data sets.

The report suggests a smorgasbord of savings in three main arenas:

  • Managing the transportation infrastructure;
  • Fighting fraud, waste and abuse;
  • Executing military, intelligence, surveillance and reconnaissance missions.

"When they look at how they are using data and how it could be used in all these kinds of fields, there are a significant amount of dollars that can be saved there," said Rich Campbell, Chief Technologist at EMC Federal. EMC Federal underwrote the Meritalk report.

"The data analysis of trends and allocation of resources is at a much more molecular level now than ever before, and the potential is becoming more realized on a day-by-day basis," Campbell said. "We're seeing more use cases and newer technologies, and the government is moving ahead and tackling the low-hanging fruit out there right now."

Clear-cut big data use cases are not yet common in the federal government, though some agencies have been doing big data since before it carried that label. The National Oceanic and Atmospheric Administration, for example, frequently analyzes large data sets with supercomputers to forecast the weather. In the intelligence community, the National Security Agency's recently leaked methods for collecting Internet and phone records are another clear-cut big data use case.

The Meritalk survey states that about one-quarter of the federal IT executives interviewed have launched at least one big data initiative, which is promising given how new the term is.

Big data today is roughly where cloud computing was for feds in 2009, and it still lacks a clear-cut definition. That one-quarter of agencies have some kind of big data initiative – even if it isn't a full-fledged project or pilot – means the government has done more than simply take notice of big data.

"In the last two to three years, we've gone through this whole transformation effect – it's morphed into a commodity that an organization can leverage from places people didn't anticipate before," Campbell said. "Agencies are using their existing resources, in some cases, to spin up Hadoop clusters and allowing applications to tie in. The landscape is really beginning to change."

About 70 percent of the federal IT executives surveyed said they believe that five years from now, big data will be critical to fulfilling federal mission objectives. But how do agencies get up to speed on big data in such a short time? Even now, many agencies have yet to realize cloud computing's full potential, and that technology emerged considerably earlier than big data.

Much as infrastructure-as-a-service and storage-as-a-service models have taken off in both the private and public sectors as organizations look to save money, Campbell said data analytics must be thought of in the same vein before agencies really put it to heavy use. Before the government gets there, though, old IT methods -- like large, long-term contracts with single contractors for proprietary solutions -- will have to go. Agencies will also have to spend considerable time deciding how they want to use big data, and specifically which questions they want their data to answer.

"The cost model of these services is going to have to change," Campbell said. "I don't foresee massive long-term contracts anymore – I see agencies looking more for cheaper ways to do data analytics. The cost is already coming down, and the processes and people are only going to get more refined."

Reader comments

Mon, Jul 1, 2013 Roy Roebuck Springdale, AR

I have been advising the government since 1984 that "Enterprise Architecture", as a form of descriptive analytics, could enable dramatic reductions in operational and development costs, with corresponding increases in accuracy, responsiveness, transparency/integration, and accountability. Big Data can provide a "collected" set of content to support this "whole enterprise" description and improvement. Big Data fits at the early stages of Terminology Management by:

  • pooling the Content of diverse information sources/containers (e.g., structured, semistructured, unstructured, and links, at all scales from individual to group to organization to global authors);
  • identifying the collection of raw words and symbols, raw terms, and raw relations/triples/concepts present in that content, as well as the metadata (e.g., Dublin Core for author, UNC/URL, dates) of the containers (i.e., words/terms/triples of content with container metadata = quadruples/quads = Content + Source Identity).

Then, using the "Big Content" in the Big Data Container, discover and collect term definitions across the content-contributing participants. Then use the collected triples to form various foundation "models" (i.e., noun-verb-noun) for:

  • activity-flow-performer-resource (Input, Control, Output, Mechanism - ICOM) models (i.e., process models):
    ◦ Analysis of Process:
      ◾ predictive, or "when will it happen;"
      ◾ prescriptive, or "how can we make it happen;"
      ◾ diagnostic, or "why did it happen;"
      ◾ descriptive, or "what happened."
  • Intelligence of Process (e.g., Business Intelligence);
  • Decision Support for Process (for a process's measurement/control points at ICOM interfaces);
  • ICOM Resource models:
    ◦ data models (conceptual, logical, physical) and database schema (physical data model for a specific DBMS);
  • knowledge models (e.g., metamodels, ontologies):
    ◦ knowledge bases built using knowledge models (e.g., architectures);
  • value models (e.g., axiologies, value-lattice, value-chain and value-stream type/class models for reuse in Process Management, Lean, and Six Sigma efforts):
    ◦ Value-Lattices, Value-Chains, and Value-Streams for specific domains (e.g., individual, group, organization, global).

Then use the increasingly refined "definition-based" models to build a local-to-global Taxonomy. Then use the increasingly refined taxonomy to build a local-to-global Thesaurus for domain, jargon, language, and interface for Master Data Management, Vocabulary Normalization and Standardization, and operational translation. Then use the increasingly refined Thesaurus as the means to move the participating community toward consistent terms and concepts, thus driving more understandable and reusable content, to start the cycle again.
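
A minimal sketch of the "content triples plus container metadata = quads" idea described in the comment above; the class names, fields and example values are illustrative assumptions, not the commenter's:

    # Sketch of "triple + container metadata = quad". Structures are illustrative.
    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class Triple:
        subject: str    # noun
        predicate: str  # verb
        obj: str        # noun

    @dataclass(frozen=True)
    class Quad:
        triple: Triple
        source: str     # container metadata, e.g. an author/URL/date identifier

    def to_quads(triples: List[Triple], source: str) -> List[Quad]:
        """Attach source identity (container metadata) to each content triple."""
        return [Quad(triple=t, source=source) for t in triples]

    # Example: raw noun-verb-noun relations pooled from one document container.
    doc_triples = [
        Triple("Agency", "operates", "Program"),
        Triple("Program", "consumes", "Budget"),
    ]
    for quad in to_quads(doc_triples, source="https://example.gov/plans/2013-budget.pdf"):
        print(quad)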

Tue, Jun 18, 2013

Let us not forget history. Usually, when the Government declares that it has a way to save lots of money, costs go up rather than down once the plan is implemented. ObamaCare is the most recent example. When a program does produce savings, they are almost always much smaller than projected. So, at best, all should be very skeptical about any claims of cost savings from Big Data.
