
By Steve Kelman


A better path to performance management


When people think about how performance measurement might improve a government organization's performance, most immediately think about how metrics can be used as a carrot or a stick to get people to work harder or to care more about what they do. Many argue that organizations should either gain or lose budget depending on their performance. Others argue that "pay for performance" should reward individual good performers and punish, or fire, bad ones.

While these common reactions are not necessarily wrong, applying them is more complicated than the knee-jerk responses suggest. Might bad performance mean that an organization is underfunded and needs more budget? If an organization is doing very well with its current budget, does it necessarily need more money? And individual-level pay for performance, aside from the political problems with union resistance, can reduce necessary cooperation among employees in a workplace and increase incentives for cheating (look at the Veterans Affairs situation).

By contrast, there is another way to use performance measures to improve performance that does not create these problems. It involves using the numbers to learn -- to diagnose what your problems are, try a new way of doing business, then measure again to see if you've gotten better. This approach has few, if any, negative side effects and does not create political resistance. In fact, motivated or creative people in organizations often find it challenging and fun.

Some years ago I read about a public school that was using standardized test results in this way. Teachers got information, broken down by class, showing how well the students in each class had performed on different parts of the standardized math test -- I'm not sure this was the actual example, but to keep it simple, let's say how well the students had done on the long division problems and the fractions problems. If teachers saw their students were doing well in division but poorly in fractions, they knew that teaching methods for, or attention to, fractions needed to improve. The results of any changed methods could then be measured the next time students in the class took the test.

Recently I was doing volunteer work at a public K-12 school south of Boston and found myself weeding the playground alongside the school's principal for the elementary grades. I asked him whether any teachers in his school used test results that way. Of course, he responded. Was that common in Massachusetts, I asked? Yes, he said, guessing that teachers in most schools in the state were using test results that way.

A big reason for this, he added, was that the Massachusetts Department of Elementary and Secondary Education had established an online system, called EdWin (launched about a decade ago, but much changed and developed since then), to provide Massachusetts schools with the information they need to use standardized test scores to learn how to do better.

The version available to schools breaks down performance on different parts of the various standardized tests by school (compared with other schools in general, or with demographically comparable schools in particular), by classroom, and even by individual student. More recent enhancements make the data available more quickly.

The latest version also ties public school data to national university databases, so a school can learn what percentage of its graduates who go to college end up earning a degree. The system's developers are now working on linking school data with data from pre-school programs so schools can know what kinds of pre-school programs their new pupils have had (this is designed to help schools tailor instruction to new pupils, but could eventually be used to link pre-school programs with elementary school performance). A publicly available version of the data allows parents to look at scores at the school level, but not at the classroom or individual-student level.

I asked Wally McKenzie, a former private-industry tech employee who runs EdWin, whether he has any estimate of what percentage of schools use these data for learning purposes. He estimated about 75 percent, and backed up his estimate by noting that in August and early September, when the school year is starting and schools are working on teaching strategies for the year, page views go up eightfold. McKenzie also noted that his organization trains schools in how to use the data, but averred that it could do a better job in this regard.

This is very exciting, in my view. It is also essentially unknown to the public -- I consider myself quite knowledgeable about performance measurement in government, and I was surprised to learn that more than a handful of schools use test results as a learning tool to improve their own performance. It would be nice for this message to get out, so that this approach to using performance measurement spreads in government.

Posted by Steve Kelman on Sep 05, 2014 at 4:35 PM



Reader comments

Thu, Sep 11, 2014 Al

Seconding Jaime's point here: feedback is important, and so is the *intensity* of that feedback. As a government worker, I won't lose my house if a particular venture fails. Some of my contractors may. There is no managerial technique that can replace that level of urgency. That said, weak beer is better than no beer. I think the lowest-hanging fruit for performance improvement is reducing the scope of what each agency does (through legislation) and refocusing resources on the things our citizens value most. That does not preclude anything that Prof. Kelman writes above; I just tend to focus on different things because I view the world through a very different prism.

Tue, Sep 9, 2014 KRL

The theory behind performance measurement is great. In fact, I cannot really argue with the concept. I don't like seeing poor performers create problems for the team, nor do I like seeing good performers punished because of bad apples. However, this concept is far from new: I have been involved in defining and participating in formal performance-management efforts three times over the last 30 years in three different types of organizations -- a defense contractor, a public/private research consortium and a university (does the term Total Quality ring a bell?). In my experience, the issues and outcomes have been the same all three times, no matter the enthusiasm of the participants or the approach taken. The reasons: First, humans are required to map the process and develop the measurement criteria, and it is not always the experts in the field who are given this task, yet those given it have very strong ideas about how much time some task "should" take. Second, humans have a tendency to settle on the performance criteria they want before they have the process mapped or sufficient data to determine what is reasonable. Third, taking an average over a hundred activities does not mean there are not situations that will be, and should be, outside the average; that does not mean a person is doing a bad job. Fourth, incomplete data is then cherry-picked and used as a performance hammer (e.g., "...this task shall be completed within 2 weeks..."), resulting in very low morale in the working environment (the VA scheduling scandal comes to mind). Fifth, the reality of multiple tasks with competing priorities never seems to register with performance reviewers who are interested only in how much time a task/project/program takes. One thing I remember from some early TQ training is the saying, "If you don't have time to do something right the first time, when will you have time to do it over?" Always liked that saying. In my humble experience, success is most likely for both people and projects when you work with a team of good, honest, hardworking people who are dedicated to a common, defined goal, with a reasonable schedule, adequate resources and an ability to communicate with each other. A good team will define both "reasonable" and "adequate."

Mon, Sep 8, 2014 Jaime Gracia Washington, D.C.

Feedback mechanisms are vital, in addition to root cause analyses, when metrics are not met. Of course, effective benchmark studies need to be conducted such that performance measures and metrics are proper, realistic, and achievable. Further, leadership needs to properly resource these initiatives, and hold people accountable. It is the accountability function that is sorely lacking in government, as it is easier to play CYA, blame contractors for poor performance, or simply accept poor performance and substandard quality.

Mon, Sep 8, 2014 Owen Ambur Silver Spring, MD

It is good that organizations are beginning to use performance metrics appropriately, and it is fitting that educational institutions lead the way -- because metrics help us learn how to improve our performance. Indeed, without them, we may essentially be blind and doomed to re-live the mistakes of the past, with our fate left to the laws of competition and natural selection. Hopefully, now that agencies are required to render their performance reports in machine-readable format, they will become more capable of learning -- not only from success but also and especially from failure. The taxpayers should not be forced to pay over and over again for the very same mistakes simply because Uncle Sam is incapable of learning from them.

