
By Steve Kelman


'Data-driven reviews' and agency performance


Last week the Government Accountability Office issued its second report (the first was in 2013) on implementation of the Government Performance and Results Modernization Act of 2010. Given my ongoing interest in this topic, I checked it out, only to find there wasn't much new in it; it mainly summarizes earlier reports. But one of those cited reports, from this past July (which came out while I was either in or just getting out of the hospital with my cancer treatment), turned out to be filled with useful and surprising information.

That report is titled, in great GAO-ese, "Agencies Report Positive Effects of Data-Driven Reviews on Performance but Some Should Strengthen Practices." A search of the FCW.com website suggested FCW did not cover this report last July, so it's probably not too late to write about it.

This report documents what I would contend is the single most significant change in the management of the federal government in quite a number of years: the spread of regularized meetings in which senior agency officials and career managers examine, using data, how the organization is doing in meeting its priority performance targets, and deliberate about steps that can be taken when progress falls short. These kinds of meetings are an important staple of life in big companies, and are considered essential to driving corporate performance. But they have never really been a part of how most agencies are managed.

"Data-driven reviews" got their start with Commissioner William Bratton's COMPSTAT system for the New York City police in the early 1990s. They attracted significant attention from other police departments, and then from a growing number of cities, where they became known as "STAT" meetings. (My colleague Robert Behn published a book last year on common features of such city systems, The PerformanceStat Potential.)

Such meetings sporadically began in the federal government after passage of the Government Performance and Results Act (GPRA) in 1993, but in many agencies GPRA was a tick-the-box compliance exercise that produced reports for outside overseers to look at or, often, to just gather dust on a shelf.

The idea of data-driven "STAT"-style meetings got wind in its sails when the Obama administration announced what GAO calls a move "away from the passive collection and reporting of performance information to a model where performance information is actively used by agency officials to inform decision-making, which is more likely to lead to performance improvements." (Full disclosure: My wife, Shelley Metzenbaum, headed this effort in the first years of the administration.) Then Congress passed the GPRA Modernization Act, which called for data-driven meetings, quarterly or more often, for CFO Act agencies.

Now, a few years later, GAO finds that 20 of 23 agencies are actually holding such meetings, quarterly or more often. (The General Services Administration holds them once a week; the Department of Commerce, Department of Veterans Affairs and NASA monthly; and the Social Security Administration every other month.)

Amazingly, in four agencies, the meetings are led by the agency head; in another 10 the chief operating officer/deputy secretary takes the reins. Only three agencies' meetings are led by the performance improvement officer, a somewhat lower-ranking career official. This represents a whole new level of senior agency leader involvement in disciplined, focused, regularized discussion of performance improvement.

Earlier GAO surveys examining what factors promote use of performance information at lower levels of the system found that top-leader support was one of the best predictors, so this new senior-leader involvement bodes well for the use of performance information throughout government.

GAO also attended reviews at several agencies, and interviewed performance improvement officers about the process and results. (Since I am skeptical of the objectivity of these officials in judging how successful their efforts have been, I am not reporting their subjective self-assessments here, and instead focusing on their answers to factual questions.)

What happens at these meetings? They almost always go through the agency's priority goals, often with a red/yellow/green traffic-light rating of progress toward meeting each performance target. Nineteen of 22 agency PIOs reported that meetings are always used to identify goals at risk, to have goal leaders (the officials formally in charge of working to meet each goal) explain why the goal is at risk, and then to discuss with those present possible strategies for improvement.

A somewhat smaller set of 12 agencies reported always identifying and agreeing on specific follow-up actions to be taken after the meeting, and nine said they check in between meetings on the status of follow-ups. And 19 of 22 agencies identified getting accurate and timely data as a challenge.

There is a significant overlap between these meetings and the movements for increased use of data analytics and for "evidence-based government." Sometimes data analytics underlie conclusions about what approaches will work best to drive a performance improvement.

"Evidence-based government" is often discussed in a legislative or OMB context with regard to choices about what programs to fund or not to fund, data-driven meetings often assume the underlying program (say, for example, the desirability of increasing production of renewable energy on federal lands the Interior Department manages, which has been one of the Department's priority goals) and use evidence to choose among alternative ways to implement the goal. All of these efforts fit into a broader attempt to improve how well government delivers results.

If somebody had asked me five years ago what I thought the prospects were that almost all cabinet agencies would be holding regular meetings focused on performance targets and led by senior agency leadership, I might not have asked the person what they were smoking, but I certainly would have been skeptical. These GAO findings represent big progress in a modest period of time. The next challenge will be to make sure this new practice survives the change of administration in 2017 and becomes as normal to government as it is to GE or Walmart.

Posted by Steve Kelman on Oct 06, 2015 at 4:27 AM



Reader comments

Tue, Oct 20, 2015 Dennis McDonald www.ddmcd.com

I'm going to check this out. I've had my doubts that quarterly reviews are sufficient but such reviews obviously need to be put into the context of what others are doing with data between such meetings.

Wed, Oct 7, 2015 Paul Brubaker

Love this as long as the targets are meaningful and relevant and the underlying data is reliable. I'm still troubled by the widespread lack of cost data even though it's been a requirement for over two decades. Agree that we need meaningful performance measures and the increased focus is pure goodness. It's come a long way - kudos for that - and has a long way to go to really drive management behavior.

Tue, Oct 6, 2015 Bruce Waltuck

Thanks for calling attention to this report, Steve. It brings up several points I believe are critical in the work of improving processes and outcomes.

> Common in the typical use of data to seek better outcomes is a presumption that, while ubiquitous, simply isn't true: the presumption that output data, and to an extent outcome data, reveal the causal relationship between action and result. While true in simple and complicated technical systems, it is not true in the complex, multi-faceted systems and challenges common to government. This is a key reason why "what worked for them may not work for us." The triumph of culture, context, and the inherent complexity of human endeavors.

> This relates directly to the (in my view, over-) use of "evidence-based practice" in organizational change. EBP comes from randomized clinical drug trials, where the system is isolated to test for a single variable and sample populations are controlled. Government, and other organizations, really can't do that in the real world of messy and multiply-factored complex challenges.

> The 2013 GAO report I think of in this case is the report on agency program and policy assessment; I don't know if this is the one you are referring to. In that report, we read that a small percentage of agencies were conducting any significant program and policy assessments, and a small fraction of those were using the resulting data to seek and drive improvements. Of greatest significance to me in that report was the fact that while Michael Quinn Patton was referenced, it was only in regard to his work on traditional assessment methods. Patton's book and work on "Developmental Evaluation" was nowhere to be seen. I tell people today that if there is just one book they should read, to understand the differences in the challenges they face and how to act in response, it is Patton's "Developmental Evaluation." Take care, Steve!

Tue, Oct 6, 2015 Lynn Ann Casey

Hi Steve - What a great topic! We had the honor of working with one of the homeland security agencies. These are great executive forums that really provide new insights. The largest benefit is when executives across mission areas and divisions bring diverse perspectives and functional knowledge to solve an agency's biggest problems. We need more of this.
