The Lectern

By Steve Kelman


The larger message in social media metrics


Justin Herman, new media manager at the GSA’s Center for Excellence in Digital Government, shown speaking at GSA's Social Media Week in February. (FCW photo by Frank Konkel)

FCW reporter Frank Konkel wrote an interesting article on FCW.com about efforts, growing out of a working group inside the General Services Administration, to develop performance measures for government social media sites.

I found the article fascinating from two perspectives. First, the metrics themselves look sensible. For example, they suggest tracking "conversions" (when people click through from the post to additional linked content), "loyalty" (when first-time visitors return), and "customer service" (timeliness in responding to requests). I am guessing many of these metrics grow out of private-sector practice in tracking social media effectiveness; and I say this as a kudos for learning from others, not a knock for lack of originality. There are too many home-grown government approaches to problems that have perfectly good private-sector counterparts. I am also hoping that the data needed to track many of the metrics presented can be generated either for free or at very low cost.
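To make the definitions concrete, here is a rough sketch of how metrics like these might be computed from interaction logs. The field names and sample records are purely hypothetical, not an actual GSA or vendor schema, and a real implementation would pull these from an analytics platform.

```python
# Hypothetical event records: each dict is one interaction with a social media post.
# Field names are illustrative only.
events = [
    {"visitor": "a1", "action": "view",    "first_visit": True,  "response_lag_hours": None},
    {"visitor": "a1", "action": "click",   "first_visit": True,  "response_lag_hours": None},
    {"visitor": "b2", "action": "view",    "first_visit": False, "response_lag_hours": None},
    {"visitor": "c3", "action": "request", "first_visit": True,  "response_lag_hours": 6.5},
]

views = [e for e in events if e["action"] == "view"]
clicks = [e for e in events if e["action"] == "click"]
requests = [e for e in events if e["action"] == "request"]

# "Conversion": share of views that led to a click-through on linked content.
conversion_rate = len(clicks) / len(views) if views else 0.0

# "Loyalty": share of viewers who are returning rather than first-time visitors.
returning = [e for e in views if not e["first_visit"]]
loyalty_rate = len(returning) / len(views) if views else 0.0

# "Customer service": average hours taken to respond to requests made via the channel.
lags = [e["response_lag_hours"] for e in requests if e["response_lag_hours"] is not None]
avg_response_hours = sum(lags) / len(lags) if lags else None

print(conversion_rate, loyalty_rate, avg_response_hours)
```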

These metrics also lend themselves to performance improvement – figuring out how to do a better job. Companies already frequently run quick experiments on their webpages or social media offerings, randomly exposing visitors to different versions of a message, or positioning a message at different parts of the screen, to see, for example, which produces more click-throughs. Government needs to start doing these kinds of experiments (in a scientific sense – two different treatments applied to randomly chosen groups, to see whether the results differ) in a lot of areas, and experiments involving social media effectiveness are a good place to start.
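A minimal sketch of that kind of randomized comparison, assuming the outcome of interest is click-through rate: a two-proportion z-test on the click counts from two randomly assigned versions of a post. The counts below are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of two randomly assigned message versions."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both versions perform equally.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts from showing two versions of a post to randomly chosen visitors.
p_a, p_b, z, p_value = two_proportion_z_test(clicks_a=120, views_a=2000,
                                              clicks_b=155, views_b=2000)
print(f"Version A CTR: {p_a:.3f}, Version B CTR: {p_b:.3f}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value would suggest the difference in click-through rates is unlikely to be chance, which is the evidence needed to pick one version over the other.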

But this article is significant even for people who are not involved at all in social media. It is a sign that performance measurement – this year marks the twentieth anniversary of the passage of the Government Performance and Results Act of 1993 – is now taken for granted as a way to do business in government. I recently interviewed a subcabinet agency head, who noted that his agency had gotten to the point where briefings for new leaders include the organization's major performance measures, the targets for improvement on those measures, and performance over the last few years. This is a revolution in government, and a good one. Performance measurement has progressed during one Republican presidential administration and two Democratic ones, suggesting that management reform really benefits from bipartisan support.

As we approach sequestration, it is a fair question to ask whether agencies can afford to spend any of their scarce resources on measuring their performance, rather than just performing. My answer would be that in tight budget times, performance measurement is more necessary than ever, because we have to get better at what we do not by throwing additional dollars around, but by improving efficiency and effectiveness. By helping organizations learn how to do a better job, by focusing them on the most important activities, and by motivating people to try harder, performance measurement is a powerful tool for lean times.

Posted on Feb 22, 2013 at 12:09 PM



Reader comments

Sun, Feb 24, 2013 Bruce Waltuck

Great post, Steve. I appreciate your noting the importance of using performance data to drive ongoing process improvement. I am not sure, however, of the extent to which government orgs are doing that. The predominant approach of management in recent years seems to me to be about targets/goals, and measurement of performance against those goals. Without a specific emphasis on process/system improvement, we tend to get sub-optimal results. Deming and others noted this – managers fudging numbers, or just acting to meet the goal (not improve the process/result over time). One other point here, about the quality of the measures and the collected data: are there currently guidelines or criteria on defining/selecting performance measures, or on ensuring that the data is consistently collected and analyzed? I've been teaching this to gov orgs for about 15 years, and I've sadly seen some real horror stories, especially on proper data collection. One agency at USDOL had half the country defining a key performance indicator one way, while the rest defined it in a totally different way. The result was that program results in the West appeared to be only 1/10th of what they were in fact. I personally informed the head of the program at DOL. Despite the Assistant Secretary's (misplaced) criticism of the program results, nothing was done to fix the data issue. The fear of calling out the powerful regional directors responsible for the error was too great!
