Experiments to improve public-sector management

Steve Kelman proposes running experiments to test different approaches to public-sector management.

In a column I just wrote, I proposed organizing an experiment to see whether providing dramatically increased resources for managing service contracts would improve contract cost and performance. Take a group of contracts and increase the contract management staffing devoted to them, say threefold. Then take a control group of other contracts and don't change how they are managed. Look two years later and see whether the first group has fared better than the second.
 
The proposal I made in the column was an example of an approach to improving government performance that deserves far more use than it's getting. We should be doing more actual experiments: an experimental group of organizations receives a treatment in the form of some management practice or approach, a control group doesn't, and an evaluation after time has passed compares the performance of the two groups. If the treatment works, we should consider applying it more broadly; if it doesn't, we can try a different treatment and see what happens.
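
To make the design concrete, here is a rough sketch in Python of the kind of comparison the end-of-experiment evaluation would involve -- a simple difference-in-means test on an outcome such as cost growth. All the numbers and names below are hypothetical illustrations, not real contract data, and a real evaluation would of course be more careful about measurement.

```python
# A minimal sketch of the end-of-experiment comparison, assuming a
# simple two-group design. All numbers below are hypothetical.
from scipy import stats

# Percent cost growth after two years for each contract (illustrative data).
treatment = [4.1, 2.8, 5.0, 3.2, 1.9, 4.4, 2.5, 3.8, 2.2, 3.0]  # tripled staffing
control = [6.3, 5.1, 7.2, 4.8, 6.0, 5.5, 7.8, 4.9, 6.6, 5.2]    # unchanged staffing

# Welch's t-test: does mean cost growth differ between the two groups?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Treatment mean: {sum(treatment) / len(treatment):.2f}%")
print(f"Control mean:   {sum(control) / len(control):.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```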
 
Of course, any time a senior Social Security manager in Washington compares the performance of various Social Security offices along some dimension, and looks to see which offices are performing better than others and why, that manager is in effect running an experiment. There are also a number of examples of large, expensive formal experiments (usually conducted for the government by university-based researchers) to evaluate various social policies -- such as giving poor people housing vouchers or sending children to charter schools -- to see whether they work or not.
 
The kinds of experiments I have in mind fall somewhere in between the informal comparisons managers make using performance data across organizational or team units, on the one hand, and the expensive scholarly experiments that often take years and cost millions of dollars, on the other. The former often have sample sizes too small, and too many confounding factors that might explain observed differences across units, to allow conclusions to be drawn with any degree of comfort. The latter apply rigorous standards of sample size, random selection and so forth, but are very expensive and time-consuming.
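
The sample-size worry is easy to make concrete with standard power-analysis arithmetic. The sketch below, using the statsmodels library and conventional (assumed) effect sizes, shows roughly how many units per group a credible comparison needs -- which is exactly why comparing a handful of offices rarely settles anything.

```python
# A rough sketch of the sample-size arithmetic, using standard power
# analysis. The effect sizes are conventional benchmarks, assumed here
# purely for illustration.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for effect_size in (0.8, 0.5, 0.2):  # large / medium / small effects
    n_per_group = power_analysis.solve_power(
        effect_size=effect_size, alpha=0.05, power=0.8
    )
    print(f"Effect size {effect_size}: about {n_per_group:.0f} units per group")
```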

We need something in the middle -- probably not rigorous enough to merit publication in an academic journal, but attentive enough to principles of experimental design to give the findings greater believability. Imagine experiments on, say, the impact of various recruitment techniques on the yield rate for highly sought-after candidates for federal jobs, or the impact of changing some feature of an agency website on public usage or favorability. Companies run experiments such as these on their products or services all the time, and we should apply the idea to government.
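
The recruitment example could be analyzed much the way companies analyze an A/B test. A minimal sketch, with hypothetical counts of offers extended and accepted under two techniques:

```python
# A minimal A/B-style comparison of two recruitment techniques,
# with hypothetical counts of offers extended and accepted.
from statsmodels.stats.proportion import proportions_ztest

accepted = [54, 38]   # offers accepted under technique A vs. technique B
offered = [120, 115]  # offers extended under each technique

# Two-proportion z-test: do the yield rates differ?
z_stat, p_value = proportions_ztest(count=accepted, nobs=offered)
print(f"Yield A: {accepted[0] / offered[0]:.1%}, Yield B: {accepted[1] / offered[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```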
 
If we are going to improve the management of the public sector, we are going to need more efforts such as these. There is already a constituency for formal program evaluations in government, especially in the Office of Management and Budget and the policy shops that exist in some cabinet agencies. We need a constituency of management experts and senior government executives who care about improving agency performance to champion this kind of experimentation inside government organizations. If there is interest in government, I bet I could rustle up some academics who would like to help.