For metric-based management, Steve Kelman argues, it's important that the measures not be specified from on high.
As many blog readers may be aware, I have long been a fan (perhaps even a fanatic) when it comes to the use of performance measures to manage in government. Performance measures can be used effectively to improve an agency's performance -- by motivating staff (there is a huge literature showing that people's performance is better when they have a specific, ambitious goal); by focusing their efforts and steering them away from lower-priority areas; and by providing data that can be used in a feedback cycle to learn about shortcomings and see if changes made in response to problems have produced improvement.
Private-sector managers routinely use metrics in the daily management of their organizations' performance. Government would be working considerably better if we did this as well. I have regularly taught about using performance measurement in Harvard Kennedy School executive education programs, and, now that I am (as of two weeks ago) back to teaching after my illness, I have been lecturing on this topic again.
Nonetheless, even an enthusiast like me realizes that poor performance metrics can have dysfunctional effects. The most common kind of problem involves what Michael Barber, who ran the aggressive performance measurement effort under British Prime Minister Tony Blair almost two decades ago, has called "hitting the target but missing the point." This refers to a situation where achieving a good score on the metric doesn't actually improve the organization's performance, because a good result on the measure is not associated with meeting the underlying goals the measure is supposed to further.
In such cases, the metric can cause everyone to run in the direction the measure directs, but that can just mean that the lemmings are marching together over a cliff. At the same time, metrics don't need to be perfect to generate improved performance -- managing to imperfect metrics is often better than managing to no metrics at all.
At the end of the three classes I recently taught on this subject, I invited the students to reflect on what they had learned about metrics -- how they can be a powerful tool for performance improvement but also how they can create misdirected effort. To make their thoughts more concrete, I then asked them to think of a flawed measure being used in their own organization, and to decide whether they thought it should be abandoned (and, if not abandoned, how it could be improved).
When I asked the question, I only had time in class to hear from three participants, but, within the constraints of this very small sample size, their responses were telling. All three named a performance measure that they had been directed to track by someone higher in their chain of command. In all three cases, the participants stated they would like to scrap the metric entirely, but added (not surprisingly) that given the source of the metric, this was not possible.
Whenever I discuss performance measurement with government managers, I emphasize that they should see this not as something being done for "them" -- overseers, Congress, their internal chain of command -- but for "us," the working managers out there trying to deliver a product or service. Federal managers routinely complain, with reason, about how few management tools they have available, compared with their brethren in the private sector, to influence the behavior of those working for them. Managing with performance measures is a potent tool that is already available. But any new way of doing business needs to run a gauntlet of suspicion on the part of hardened managers.
For this reason, it's important to allow the managers themselves to help set the metrics. Sometimes, obviously, the higher-ups will want agency managers to track a performance metric the managers themselves don't like. But whenever possible, I think outsiders should be wary of imposing a specific metric on an organization. We want managers to have as much ownership of their metrics as possible, and imposed measures take that sense of ownership away.