By Steve Kelman


Government steps out on 'nudges'


Some blog readers are probably familiar with the word "nudge" as used to describe various initiatives, often undertaken by government, to encourage individual behavior in line with a policy goal in less intrusive, less command-style ways. In the context of public policy and management, the term entered our vocabulary with Nudge, a 2008 bestseller written by the Nobel Prize-winning economist Richard Thaler and the super-prolific law professor Cass Sunstein.

The idea is that, unlike the case with a government regulation, "nudged" people are not directed or forced to make a certain decision. The choice they make is left up to them. What the government does, however, is arrange the choices available to individuals in a way that unobtrusively favors a choice reflecting the social value the government wants to encourage. Thaler and Sunstein define a nudge as any aspect of structuring the alternative choices people face "that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not."

A classic example of a nudge involves people deciding whether to contribute money from their salaries toward retirement savings. It turns out that if the choice is arranged as "opt in" -- you do not contribute any salary unless you actively decide to do so -- far fewer people make these contributions than if the same choice is arranged as "opt out," where you contribute salary unless you actively choose not to.

Another nudge involves where to place vegetables versus desserts in a school cafeteria line. Because nudges steer people toward choices intended to be good for them without compelling anything, the approach is often called "soft paternalism" -- with "hard" paternalism, the government forces a decision on people that is intended to be for their own good. With nudges there is no coercion, even for a "good cause." Nudges thus are a very promising way to set up no- or low-cost interventions that improve the decisions program participants make, and they deserve much more use in government.

I recently discovered, thanks to a column by John Kamensky, my old friend from reinventing-government days in the 1990s who is now at the IBM Center for the Business of Government, that the General Services Administration (GSA) has been working for a few years to promote the use of nudges in the federal government through a small outfit called the Office of Evaluation Sciences.

This effort grew out of Obama-era interest in nudges at the White House -- including the Office of Science and Technology Policy and the Office of Information and Regulatory Affairs (OIRA), which Sunstein headed. The OIRA initiatives were in turn inspired by a British government organization called the Behavioral Insights Team, which pioneered the use of nudge interventions in government. OIRA seeded some nudge efforts and then, as often occurs when the White House is interested in an initiative but lacks the people to staff it itself, farmed the work out to GSA in 2014. To head the office, GSA made an unusual choice: Kelly Bidwell, who had a master's in international development from Columbia with an interest in evaluating anti-poverty programs and had spent most of her career in Africa working on randomized controlled trials and other rigorous program evaluation methods. A friend at the NGO where she was working told her about the opportunity.

The office has now completed 70 or so projects and works with about half of the cabinet agencies in areas such as improving public health outcomes, reducing the cost of government operations, increasing educational opportunity and strengthening retirement security. Many of its projects come from repeat agency customers who have been pleased with the office's work, though it also spreads the word by holding events around each report and an annual day where staff present highlights of their work. Work with agencies running the experiments is covered partly by the office's budget and partly on a reimbursable basis. The staff is half civil servants and half academics, who typically come into government for one year under a governmentwide program allowing temporary details for scholars.

All of the Office of Evaluation Sciences' interventions are nudges -- there are no tests of the impact of requiring people to make a certain decision. The tests follow a standard playbook like the one used in academic social science experiments. The starting point is an individual behavior the agency is trying to influence.

In the case discussed in Kamensky's column, it was doctors overprescribing antipsychotic drugs for elderly patients. The individuals whose behavior the agency is trying to change are then randomly divided into two groups -- in this case, two groups of doctors previously identified as over-prescribers. One group, the experimental group, was sent a letter warning against over-prescription that included information on how their prescription rates compared with those of other doctors in their state. That additional information constituted the intervention whose effectiveness the agency was trying to test.

The other group, the control group, received the same warning but without the information that they personally were high prescribers. The intervention -- telling doctors that their own prescription rates were higher than average -- reduced prescriptions by 11%, saving money and improving patient safety. It was also extremely low-cost compared with alternatives such as a physician education campaign or audits of physician prescribing, and it did not mandate any change in behavior.
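To make that playbook concrete, here is a minimal sketch in Python of how such a randomized test might be assigned and analyzed. The data, prescriber IDs and effect sizes are all invented for illustration; this is a sketch of the general randomized-trial approach, not the office's actual procedure or code.

```python
import random
import statistics

# Hypothetical list of prescriber IDs previously flagged as over-prescribers (made-up data).
prescribers = [f"doc_{i}" for i in range(1000)]

# Step 1: randomly assign each prescriber to treatment (letter with peer
# comparison) or control (standard warning letter).
random.seed(42)
assignment = {doc: random.choice(["treatment", "control"]) for doc in prescribers}

# Step 2: after the follow-up period, pull the outcome from administrative data.
# Here we simply simulate prescription counts for illustration.
def simulated_outcome(group):
    base = 100                                   # baseline prescriptions
    effect = -11 if group == "treatment" else 0  # pretend the letter helps
    return base + effect + random.gauss(0, 15)   # plus random noise

outcomes = {doc: simulated_outcome(grp) for doc, grp in assignment.items()}

# Step 3: estimate the treatment effect as the difference in group means.
treated = [outcomes[d] for d, g in assignment.items() if g == "treatment"]
control = [outcomes[d] for d, g in assignment.items() if g == "control"]

effect = statistics.mean(treated) - statistics.mean(control)
pct = 100 * effect / statistics.mean(control)
print(f"Estimated effect: {effect:.1f} prescriptions per doctor ({pct:.1f}%)")
```

Because assignment is random, the two groups should be comparable on average, so the difference in outcomes can be attributed to the letter rather than to pre-existing differences among doctors.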

The Office of Evaluation Sciences has a fairly standard modus operandi for deciding whether to take on a project. Agencies come to the office with an idea for something they would like studied, explaining the change in individual behavior they seek. GSA then works with the agency to see whether the idea is a good fit for its help. The requirements for a feasible project include:

  1. that the government effort where an intervention is being suggested involves an individual interacting with the agency through a communications or signup point;
  2. that the outcome the agency cares about depends in part on the user's own actions (e.g. choosing among alternative insurance plans);
  3. that overall information on the outcome in question for the whole population exists in agency administrative data (if it doesn't, a project becomes much more expensive, because such data would need to be generated for the project and because Paperwork Reduction Act approvals are required for new data collection); and
  4. that it is possible randomly to assign participants to an experimental or control group.

The office conducts the experiments itself. About half of the projects initially proposed end up being implemented. As part of preparing a project, agency staff review the relevant literature to see what is known about possible interventions.

One of the most intriguing and unusual features of the Office of Evaluation Sciences is its decision, dating from the founding of the office in 2014, to release the results of all the experiments it runs, even those where the intervention didn't work to improve people's behaviors, which is the case for probably over half of GSA's interventions. (Similar numbers seem to apply to studies done by scholars, though results are often not disclosed.) 

If this percentage of interventions that don't work seems high, compare it with the many far, far more expensive interventions (say, to improve the school performance of disadvantaged children) that don't work either. The office seems to follow the Silicon Valley principle of trying fast and failing fast.

Disclosing is a good practice, both because it allows others to learn from unsuccessful interventions and because it does not paint too rosy a picture of success. But this is something government is often disinclined to do, preferring to keep unsuccessful efforts private when it can. When I mentioned this to my wife, she correctly said, "This is not just government; people are like this in general." But in government the disincentives for reporting failures are greater because of the risk of media and congressional criticism.

Kamensky's column on the office highlighted only interventions that succeeded in changing behavior, without mentioning the failures. The office, by contrast, bravely decided at the outset to publicly release the findings of all its projects, including those where the intervention being tested had no effect on behavior. (The office calls these "null findings" rather than the more stigmatizing "failures" -- a piece of academic social science jargon for interventions that produce no results. The term comes from the commonly used phrase in academia that the "null hypothesis" -- the hypothesis that an intervention will have no effect -- cannot be rejected.) "We definitely don't view nulls as failures but part of the learning and testing process," Bidwell said.
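For readers unfamiliar with the jargon, here is a small illustrative Python sketch, again with simulated data, of what "failing to reject the null hypothesis" looks like in practice: when treatment and control outcomes are statistically indistinguishable, the analysis reports a null finding rather than an effect. The 30% response rate, sample sizes and test choice are assumptions made purely for the example.

```python
import math
import random

random.seed(0)

# Simulated binary outcome (1 = desired action taken). The "intervention" here
# has no real effect, so both groups share the same underlying 30% rate.
treatment = [1 if random.random() < 0.30 else 0 for _ in range(500)]
control = [1 if random.random() < 0.30 else 0 for _ in range(500)]

p_t = sum(treatment) / len(treatment)
p_c = sum(control) / len(control)

# Two-proportion z-test: under the null hypothesis of no difference, the
# standardized difference between the group rates should be small.
p_pool = (sum(treatment) + sum(control)) / (len(treatment) + len(control))
se = math.sqrt(p_pool * (1 - p_pool) * (1 / len(treatment) + 1 / len(control)))
z = (p_t - p_c) / se

print(f"treatment rate {p_t:.3f}, control rate {p_c:.3f}, z = {z:.2f}")
# If |z| stays below roughly 1.96, the null hypothesis cannot be rejected at the
# conventional 5% level -- a "null finding," not proof the idea could never work.
```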

There has been a lot of discussion recently in academic social science about how difficult it is to get papers with negative findings published, and about how scholars often must run an experiment several times before getting a positive result, which is then published. This means the world knows less about null findings than about positive ones, which biases our views about how successful interventions are and has contributed to situations where scholarly findings, including some very famous ones, cannot be "replicated" -- that is, they don't hold up when tried again.

Recently, more scholars have been disclosing null findings, and it has become easier to get papers with null findings published. (My Kennedy School colleague Todd Rogers, who works in this area, sent me copies of two of his published papers with null findings.)

Having said this, virtually no academic researchers go as far as GSA in publishing all findings. GSA is at the cutting edge here -- Elizabeth Linos, a young scholar at the Goldman School of Public Policy at Berkeley who works on nudges, told me what the office is doing is "fantastic." Likewise, out of concern that scholars will keep changing the tests they run on their data until they find one that "works," there has been a growing movement in social science to have scholars "register" their experimental protocols and planned analyses before running their experiments. A few academic journals now require this. GSA is again at the cutting edge, having started two years ago to "pre-register" projects -- that is, to publicly disclose what the intervention is and what data analysis will be used to see whether the intervention worked.

The passage of the Evidence Act (formally the Foundations for Evidence-Based Policymaking Act) earlier this year is increasing the interest in what the office does. That law requires agencies to report to OMB about their efforts to gather and use evaluations and evidence about program performance. I will confess I am not especially enthusiastic about this often-used way to influence agency management, which promotes seeing management improvement as a compliance drill. And when the law was first passed, I thought of it as a way to revive the kinds of program evaluations the government often did 50 years ago, which were expensive and took five years before actually revealing any results.

Bidwell said, however, that attitudes as to what constitutes evidence have changed significantly, so that they now include quick and cheap ways to gather evidence of the sort that the Office of Evaluation Sciences does. The federal government is behind not only the UK but also some U.S. local governments that have nudge units, as well as academia, where lots of nudge experiments are conducted. Especially if the law encourages agencies to try the kinds of nudge interventions in which the Office of Evaluation Sciences specializes, it will be good news for good government.

Posted by Steve Kelman on Sep 11, 2019 at 1:49 PM

