Machine learning, agency missions and finding the shape of the haystack

Will computers that take information in, build hierarchies and learn over time diminish the role of human judgment in government?

Getting computers to learn and process information the way people do has brought us self-driving cars and facial recognition technology, and it is helping geneticists make better sense of the human genome.

But how can federal agencies use machines that essentially think for themselves? Will these computers that take information in, build hierarchies and learn over time diminish the role of human judgment in government?

Dan Chenok, executive director of the IBM Center for the Business of Government, says machines won’t be replacing people in policy-making any time soon.

“At the end of the day, most policy decisions are decisions of human judgment and machines can make those more effective,” Chenok said at a panel last week hosted by the Urban Institute in Washington, D.C., on machine learning in a data-driven world.

“[IBM’s] Watson and cognitive computing aren’t about replacing the decision power of people. They’re really about enhancing those powers,” said Chenok.

Better, stronger, faster

The Watson analytics technology became famous by challenging Jeopardy champions. Alex Trebek isn’t the only one impressed by its ability to quickly process and understand data.

The Department of Veterans Affairs is testing Watson’s ability to rapidly sift through huge amounts of electronic medical records. The aim is to help speed up data-driven clinical decisions, including those involving PTSD cases. The VA estimates as many as 20 percent of veterans who served in Iraq and Afghanistan suffer from PTSD.

“Agencies are taking advantage of the capacity of machines to take information in to make better decisions. Some of the research we’ve done with the IBM Center has shown examples of how government leaders and government officials are using technology to provide better services,” said Chenok.

He cited the Social Security Administration and the Food and Drug Administration as other examples of federal agencies using machine learning and analytics effectively. The SSA has automated the process of delivering benefits to millions of people every day, and the FDA says it is “modernizing” the drug review process through a program called JumpStart.

Launched in late 2013, JumpStart uses analytics to let drug reviewers assess the data quality of a submission much faster, simplifying and fast-tracking a process that could sometimes take months. According to the FDA, JumpStart runs a series of analyses on clinical trial data early in the review process to assess the composition and quality of the data and to identify options and tools for analyzing it. The program’s aim is to help reviewers understand the data well enough to conduct an effective evaluation of the drug submission. JumpStart delivers its findings to reviewers within two weeks of receiving a new drug submission, which the FDA says gives them time to clarify issues or make requests of the organizations developing the drug before proceeding with the review.
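
The FDA hasn’t published JumpStart’s internals, but the kind of early data-quality pass it describes can be sketched in a few lines. The example below is a hypothetical illustration that assumes a submission arrives as a tabular dataset; the column names, checks and thresholds are stand-ins, not the agency’s actual rules.

```python
# Hypothetical sketch of an early data-quality pass over a drug-trial submission.
# Column names, checks and thresholds are illustrative, not the FDA's actual rules.
import pandas as pd

REQUIRED_COLUMNS = ["subject_id", "treatment_arm", "visit_date", "adverse_event"]

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarize composition and quality issues a reviewer might want flagged early."""
    report = {}

    # Composition: which expected fields are present, and how many records per arm.
    report["missing_columns"] = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if "treatment_arm" in df.columns:
        report["records_per_arm"] = df["treatment_arm"].value_counts().to_dict()

    # Quality: share of missing values per column and duplicate subject/visit rows.
    report["missing_value_rate"] = df.isna().mean().round(3).to_dict()
    key_cols = [c for c in ("subject_id", "visit_date") if c in df.columns]
    if key_cols:
        report["duplicate_records"] = int(df.duplicated(subset=key_cols).sum())

    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "subject_id": [1, 1, 2, 3],
        "treatment_arm": ["drug", "drug", "placebo", "placebo"],
        "visit_date": ["2015-01-05", "2015-01-05", "2015-01-06", None],
        "adverse_event": [None, "headache", None, None],
    })
    print(data_quality_report(sample))
```

The point of such a pass is not to judge the drug but to surface structural problems early, so reviewers can go back to the sponsor before the clock on a detailed evaluation starts running.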

Limits, internal and external

The Virginia Bioinformatics Laboratory at Virginia Tech is working on modeling all the data in Arlington County to measure the “rhythm” of the county.

Stephanie Shipp, deputy director and research professor at the lab, said cities are beginning to embrace machine learning and it can be helpful with prediction “as long as you know which needle in the haystack you’re looking for.”

But Shipp also noted that machine learning can have its limitations.

“Machine learning is not always predictive or accurate,” said Shipp, citing Google’s flu tracker as an example.

Google Flu Trends, a site that monitors millions of Google search queries for flu-related terms and aggregates them to estimate flu activity, hasn’t always been spot on with its predictions. It over-predicted flu cases during the 2013 season, reporting roughly twice as many cases as the Centers for Disease Control and Prevention recorded.
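
Google never published the full Flu Trends model, but the basic idea is to regress an observed flu measure on the share of searches that use flu-related terms and then predict from new query data. The sketch below illustrates that idea with made-up numbers and a single-variable least-squares fit; none of it reflects the actual system. It also hints at why the approach is fragile: if people search about the flu for reasons unrelated to illness, such as heavy media coverage, the prediction overshoots.

```python
# Toy illustration of the Flu Trends idea: fit flu activity to flu-related query share.
# All numbers here are made up; the real system used many query terms and a richer model.
import numpy as np

# Historical training data: weekly share of searches that are flu-related (percent)
# and the CDC-reported rate of influenza-like illness for the same weeks.
query_share = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.1])
cdc_ili_rate = np.array([1.0, 1.4, 1.9, 2.5, 3.2, 3.9])

# Ordinary least-squares fit: ili_rate ≈ slope * query_share + intercept.
slope, intercept = np.polyfit(query_share, cdc_ili_rate, deg=1)

# A new week where heavy news coverage drives flu searches well beyond actual illness.
new_query_share = 5.0
predicted = slope * new_query_share + intercept
print(f"Predicted ILI rate: {predicted:.1f} (the model assumes searches track illness)")
```

The fit is only as good as the assumption that search behavior tracks sickness, which is exactly the assumption that broke down in the season Shipp cited.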

Constantine Kontokosta, deputy director for academics at the New York University Center for Urban Science and Progress (CUSP), added that cities are moving away from collecting data for its own sake and toward thinking about what problem they’re trying to solve with it.

“It’s about understanding the shape of the haystack,” said Kontokosta.

Panelists also discussed the importance of having policy leaders who understand the new technology.

“What are the baseline skills policymakers need to have?” said Kontokosta. “I think part of the challenge now in the discord is that policymakers generally don’t have the kind of technical skills, and as soon as they start hearing these terms, they shut down.”

He said one of his goals at CUSP is to train city leaders through the civic analytics program to have at least baseline knowledge of the technical tools so they can join the conversation.