Talk about an easy way to improve your organization’s performance!
During the last decade, Denmark's Aarhus University emerged from nowhere to become perhaps the leading location in Europe for research on public-sector management. (Denmark is a small country that lately seems to be known in the U.S. mainly as Bernie Sanders' model society; perhaps its large public sector stimulates interest in research in this area.)
In particular, Aarhus has been one of the pioneers in applying experimental methods to public management research. This involves randomly dividing a sample -- for instance, government employees -- into two groups that are then exposed to different interventions thought to affect, say, employee performance. Because assignment to the groups is random, differences in outcomes can be attributed to differences in the "treatments" the experimental subjects received.
In a recent paper in the journal Public Administration Review, Mogens Jin Pedersen, an Aarhus PhD, reports on an experiment whose results should be of interest to every federal manager. His topic is "public service motivation" (PSM), a motivation present in many (though obviously not all) civil servants to serve society through their work. This is a hot topic in public administration research, since it represents probably the only advantage that government organizations have over private ones in motivating staff to perform well.
As Pedersen points out, most existing research on this topic discusses how to cultivate PSM, whether through initial hiring decisions (I discussed some research from Germany on this topic in a blog post last year) or through encouragement after the employee starts a job. Pedersen takes a different path -- given some existing level of public service motivation in your employees, how can you activate it so it influences employee behavior?
Pedersen tests a treatment to activate public service motivation that is so low-cost as to be effectively costless.
In the experiment, he presents all subjects (law students at Aarhus) with the following question at the end of a survey on several different topics: "In the near future, you will be invited to participate in a survey about your daily life. How many minutes are you at most willing to spend on completing this survey?"
All respondents were asked to give a number ("will not participate" was an acceptable response, coded as zero). The control group received only this instruction. One experimental group received the additional information: "Your participation will help ensure the development of society and thus serves the public interest." The other experimental group received different additional information: "Your participation will help ensure that citizens in need are aided in the best possible way."
The experimenter then compared the average amount of time the control group and the two treatment groups said they would be willing to spend filling out the survey.
This one-sentence difference between the control and experimental groups produced a difference in the time people were willing to spend on the survey. For the control group, the average was 7 minutes 35 seconds. For the treatment groups -- there was no notable difference between the two -- the average jumped to 8 minutes 29 seconds! An extremely low-cost activation of respondents' existing public service motivation made a genuine difference in the work effort they were willing to devote to a task.
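To get a sense of the effect size implied by those reported averages, here is a quick back-of-the-envelope check (using only the figures quoted above):

```python
# Effect size implied by the averages reported in the post.
control_sec = 7 * 60 + 35    # control group: 7 min 35 s = 455 seconds
treatment_sec = 8 * 60 + 29  # treatment groups: 8 min 29 s = 509 seconds

diff_sec = treatment_sec - control_sec        # extra time offered, in seconds
pct_increase = diff_sec / control_sec * 100   # relative increase over control

print(diff_sec)                # 54 seconds
print(round(pct_increase, 1))  # about 11.9 percent more time
```

In other words, a single added sentence bought roughly 12 percent more stated work effort.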
Pedersen notes that what was measured was the stated willingness to spend time; respondents were not measured for how much time they would actually spend filling out the survey. Stated intentions very likely overstate actual effort, but it is just as plausible that both groups exaggerate to a similar degree, in which case the difference between control and treatment groups would still hold. One would love to expand this experiment to include actual completion of a survey by respondents, so that real work levels could be compared.
Although not totally definitive, these results are dramatic enough -- especially considering the near-costlessness of the intervention -- to be worth trying by federal managers. One would want to test, in different organizations, the relative impact of different messages, and also learn how often, or over what time period, a similar low-cost appeal can be repeated before people become habituated to it and the effect disappears.
But managers would be stupid not to give the approach outlined in this experiment a try.
Posted by Steve Kelman on Feb 08, 2016 at 3:27 PM