Steve Kelman shows how tests and metrics can be used to really improve, rather than simply reward and punish.
When people think about how performance measurement might improve a government organization's performance, most immediately think about how metrics can be used as a carrot or a stick to get people to work harder or to care more about what they do. Many argue that organizations should either gain or lose budget depending on their performance. Others argue that "pay for performance" should reward individual good performers and punish, or fire, bad ones.
While these common reactions are not necessarily wrong, applying them is more complicated than the kneejerk responses suggest. Might bad performance instead suggest that an organization is underfunded and needs more budget? If an organization is doing very well with its current budget, does it necessarily need more money? And individual-level pay for performance, aside from the political problem of union resistance, can reduce necessary cooperation among employees in a workplace and increase incentives for cheating (look at the Veterans Affairs situation).
By contrast, there is another way to use performance measures to improve performance that does not create these problems. It involves using the numbers to learn -- to diagnose what your problems are, try a new way of doing business, then measure again to see if you've gotten better. This approach has few if any negative side effects and does not create political resistance. In fact, motivated or creative people in organizations often find it challenging and fun.
Some years ago I read about a public school that was using standardized test results in this way. The school got information, divided up by class, showing how well the students in each class had performed on different parts of the standardized math test -- I'm not sure if this was the actual example, but to make it simple, let's say how well the students had done on long division problems and on fractions problems. If a teacher saw their students were doing well in division but poorly in fractions, they knew that teaching methods for, or attention to, fractions needed to be improved. The results of any changed methods could then be measured the next time students in the class took the test.
Recently I was doing volunteer work at a public K-12 school south of Boston, and found myself weeding the playground alongside the principal of the school's elementary students. I asked him whether any teachers in his school used test results that way. Of course, he responded. Was that common in Massachusetts? I asked. Yes, he said, guessing that teachers in most schools in the state were using test results that way.
A big reason for this, he added, was that the Massachusetts Department of Elementary and Secondary Education had established an online system called EdWin (initially a decade ago, though much changed and developed since then) to provide Massachusetts schools with the information they need to use standardized test scores to learn how to do better.
The version available to schools divides up information on performance on different parts of the various standardized tests by school (compared with other schools in general, or with demographically comparable schools in particular), by classroom, and even by individual student. More recent enhancements of the system make the data available more quickly.
The latest version also ties public school data to national university databases so a school can learn what percentage of its graduates who go to college end up graduating. The system's managers are now working on linking school data with data from pre-school programs so schools can know what kinds of pre-school programs their students have had (this is designed to help schools tailor instruction to new pupils, but could eventually be used to link pre-school programs with elementary school performance). A publicly available version of the data allows parents to look at scores at the school level, but not at the individual classroom or student level.
I asked Wally McKenzie, a former private industry tech employee who runs EdWin, whether they have any estimate of what percentage of schools use these data for learning purposes. He estimated about 75 percent, and backed up his estimate by noting that in August and early September, when the school year is starting and schools are working on teaching strategies for the year, school page views go up eightfold. McKenzie also noted that his organization trains schools in how to use the data, but averred that it could do a better job in this regard.
This is very exciting, in my view. It is also essentially unknown to the public -- I consider myself quite knowledgeable about performance measurement in government, and I was surprised to learn that more than a handful of schools use test results as a learning tool to improve their own performance. It would be nice for this message to get out, so this approach to using performance measurement in government spreads.