
By Steve Kelman


Making better use of evidence


The front page of Sunday's Boston Globe had two articles (of a total of five) about efforts to improve social outcomes by gathering evidence about what works and what doesn't. The top article was headlined "Hospitals size up the lessons of Marathon attacks." Trauma care for Marathon bombing victims was, on the whole, magnificent – not a single person who arrived in an emergency room alive died in the hospital – but the receiving hospitals are nonetheless now conducting after-action reviews of the details of what happened, which will produce reports to be shared with hospitals around the country.

At this point, it looks like the main recommendations will involve quicker use of social media as an early warning system (perhaps by the police), improvements in how off-duty employees who simply show up to help can be better deployed, and improvements in the process for identifying victims who arrive at the hospital with no ID.

There was also an article on the front page called "Dartmouth tackles binge drinking culture," about efforts at the college in an isolated New Hampshire town with freezing winters to reduce the binge drinking that made the school the inspiration for the movie "Animal House." The basic method is to try a lot of small changes – various kinds of meetings with at-risk students, for example – and track results to see what works and what doesn't. Some of the ideas stem from tactics that former Dartmouth President Jim Yong Kim, a public health expert who is now head of the World Bank, had seen show promise in slowing the spread of AIDS or tuberculosis in developing countries. In the last two years, the number of students admitted to local hospitals with alcohol overdoses has declined from 80 to 31.

"Evidence" in government was once limited to partisan-sponsored studies that coincidentally supported the preferred conclusions of the interest group sponsoring them, or very lengthy and expensive randomized controlled trials to determine the impact of various government programs. In recent years, the range of techniques available to gather evidence has dramatically expanded to include quick experiments about practical short-term interventions, or even less rigorous but still potentially helpful methods of gathering evidence (such as benchmarking across different government offices). The arrival of more social psychologists, trained in experimental methods, into public policy and administration programs – and even government organizations such as the Behavioral Insight Unit in the UK government – means that this approach will become more and more the future.

As these articles suggest, this approach is becoming common in efforts to deal with social problems outside government as well. It sure beats most of the alternative ways of deciding on policy and management approaches. This needs to become a normal part of the toolkit for public policy and management.

Posted on Jul 30, 2013 at 8:04 AM


Reader comments

Wed, Aug 7, 2013 Michael Alexander Lexington, MA

There is a "born yesterday" quality to current-day discussions of "evidence-based" program decisions (cf. Stigler's Law). At least since the Johnson Administration, social science investigators (I knew of Abt Associates, but surely other analytical groups) were engaged. These evaluations may not have met the standard of randomized double-blind studies (also, not a 21st century invention), but they were far from seat-of-the-pants affairs. Cost-benefit analyses, which have been used for decades, ideally are also grounded in evidence. In science & engineering contracting it has been common (given adequate funding) to support two or three approaches in a Phase 1, and then "downselect" to the most promising approach. I don't argue that nothing is new under the sun, only that a little less breast-beating, a little more, humility, and less implied denigration of rank-and-file government analysts is in order. I get the impression that most "evidence-based" programs will not be financial big-ticket items (perhaps the total of health care studies will be a major exception). The multibillion dollar busts I know of were "sold" using "evidence," but the "evidence" was incomplete or not as strong as program advocates and potential contractor organizations claimed (see article on the border security fence in the August issue of National Security magazine for a recent example). If "evidence-based" decision-making is to have real impact, to be more than a fad that's applied to budgetary small-bore programs, it must be courageously and rigorously applied to complex big-ticket items.

Tue, Jul 30, 2013 Steve Kelman

Al, I don't know the evidence well enough -- my understanding is that there is such variance among programs that go under the name "Head Start" that the evidence is that some kinds work and some don't -- but yes, let me suggest the revolutionary point that if we are pretty sure an intervention doesn't work, we should stop spending money on it.

Tue, Jul 30, 2013 Al

Does this mean we should cancel Head Start? I don't think evidence and results are *always* the point...
