
By Steve Kelman


How to measure training effectiveness


Training, always an appealing target for cuts in tight budget times, has also come under a further cloud in the wake of the government's conference hysteria, since there is sometimes an overlap between the two. Many believe that training is a crucial part of obtaining good employee performance, but others argue that a lot of training provided by government agencies, or by vendors under contract to government, is rote and not engaging.

This cries out for performance measurement, to develop information about which training providers are doing a better or worse job, and to provide a series of natural experiments that could produce improvements in training offerings. If some ways of training on similar topics produce better results than others, we need to learn what distinguishes success from failure so we can spread good practice.

Having said that, we haven't gotten very far in developing performance measures for training programs, except for the student satisfaction ratings many of these courses use. (Those are not useless, but they are hardly dispositive.) We have talked somewhat about this at the Kennedy School with regard to our executive education programs. We have toyed with the idea of doing some before-and-after interviews with the direct supervisors of a sample of our participants, to see whether, and how, their job performance has improved after being exposed to our programs. I am embarrassed to say, however, that this effort has so far stayed at the discussion stage, and we are still limiting ourselves to the classic approach of asking students to rate the professors and the program.

Because of this, some efforts under way at the always magnificent Veterans Affairs (VA) Acquisition Academy are noteworthy. (Full disclosure: I am on the advisory board of the Academy, an unpaid position, and learned about this effort at an advisory board meeting.) They have introduced a low-cost way of getting feedback from the supervisors of participants in their training course for VA program managers. In addition to surveys of participants about knowledge and knowledge utilization improvements (the results of which are almost certainly biased way upward), they have also done a simple survey of supervisors with questions such as whether there were "positive and noticeable changes in their staff members' project management behavior" and whether the supervisor believed that "cost, schedule, and/or performance improved as a result of training." While these responses are also probably subject to exaggeration (especially since a supervisor might feel that a failed program would reflect badly on them for sending the participant there), they are likely more accurate than participant self-reports, and the specific questions do deal with outcomes of the training.

Incidentally, for what it's worth, on average 71 percent of supervisors reported that participant behavior improved and 74 percent said cost, schedule and performance had improved. Even if those figures are double the real numbers, getting improvements from even a third of training participants isn't bad. In addition, I hope that over time the Acquisition Academy will develop enough of a database on supervisor reports that it can analyze differences among programs and faculty members in producing performance improvements.
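To make the idea of such a database concrete, here is a minimal sketch (in Python) of how supervisor survey responses might be tallied and compared across programs. This is not the Academy's actual system; the program names, field names and yes/no answers below are hypothetical stand-ins for the two survey questions quoted above.

from collections import defaultdict

# Each record: which course the participant took, plus the supervisor's two
# yes/no answers (behavior change observed; cost/schedule/performance improved).
responses = [
    {"program": "Program Mgmt A", "behavior_improved": True,  "outcomes_improved": True},
    {"program": "Program Mgmt A", "behavior_improved": True,  "outcomes_improved": False},
    {"program": "Program Mgmt B", "behavior_improved": False, "outcomes_improved": False},
    {"program": "Program Mgmt B", "behavior_improved": True,  "outcomes_improved": True},
]

def improvement_rates(records):
    """Return per-program rates of 'yes' answers for each survey question."""
    counts = defaultdict(lambda: {"n": 0, "behavior": 0, "outcomes": 0})
    for r in records:
        c = counts[r["program"]]
        c["n"] += 1
        c["behavior"] += r["behavior_improved"]   # True counts as 1, False as 0
        c["outcomes"] += r["outcomes_improved"]
    return {
        prog: {
            "behavior_rate": c["behavior"] / c["n"],
            "outcomes_rate": c["outcomes"] / c["n"],
        }
        for prog, c in counts.items()
    }

for program, rates in improvement_rates(responses).items():
    print(f"{program}: behavior {rates['behavior_rate']:.0%}, "
          f"cost/schedule/performance {rates['outcomes_rate']:.0%}")

With a few years of such records, the same grouping could just as easily be done by instructor or course version, which is exactly the kind of comparison that would let the Academy see which offerings produce the biggest reported improvements.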

All in all, the VA Acquisition Academy has developed an important innovation for improving the value the government gets from training programs. There are a lot of program management courses out there. How about the Acquisition Academy trying to work with the Defense Acquisition University, the Chief Acquisition Officers Council and relevant others to get a standard set of simple questions to ask supervisors so we can develop more data on the effectiveness of training programs in this area?

Posted on Aug 08, 2013 at 12:34 PM



Reader comments

Tue, Sep 10, 2013 Shelley Kirkpatrick

Interesting article and discussion! Measuring Level 3, or application to the job, can be straightforward and extremely cost-efficient if done with a short survey of the employee/student or supervisor. Some agencies see supervisor input as more objective and thus credible, while others do not trust their supervisors, especially in technical areas, to provide accurate responses. I've also run up against other obstacles, such as fear of poor results (possibly due to a lack of front-end analysis, as Judy notes), union restrictions, and supervisors who are not in the same location as their employees and therefore cannot provide input. In my experience, deep evaluation skill and experience is needed to go beyond routine Level 3 surveys and find other ways to gather credible measures of training impact, such as existing measures, short phone surveys or focus groups.

Tue, Aug 27, 2013

That last comment is simply perfect! Pay attention congress, just pay attention...

Tue, Aug 20, 2013 The Trained Trainer

The first "measure" is to "measure" if the training was physical or virtual. If virtual, chances are it's not one the agency cares about. Next "measure" its nature with the sniff test. If it's simply a class that's meant to check a box to tell congress we're "training" our people - IOW if it's cyber security or HR related - then it's trash. So if you're required to take virtual cyber security training it's a given that it's trash. If it's virtual training for HR related things, then it's also trash and just meant to give congress the impression people were trained. And the agency feels like their off the hook. Now, if it’s training that physical training, in other words it was important enough to get you out of the office for, then it has a chance of being useful. However, if it’s training that’s meant to make you more efficient, due to congress’ ineffective funding model, it’s probably trash since the employee won’t be given the tools and resources needed to build the bridge once they get back to the office. So you’ll have a lot of ideas – most people call them common sense – but you won’t have the people, money, or other tools to get the job done. And you’ll have a congressman yelling that people are trained and not getting the job done. So they forget that a well-trained soldier with no rifle or rounds might not be an effective war fighter. Good to be them I say. So, be weary of taking training unless you sure you’ll be funded and resourced when you get back to the office. So that’s how you “measure” training – by making sure you’ll have a snow-ball’s chance in hell of using the training as trained later on. And if it’s virtual training, chances are it’s trash from the start and only there to check off a box for congress.

Tue, Aug 13, 2013 Jaime Gracia Washington, DC

Interesting topic, Steve. I provide training services to VAAA, in addition to other private companies around town. Beyond DAU and FAI, the biggest problem I see is that federal government managers are required to be certified (FAC or DAWIA), with little thought given to executing what they have learned. Due to budget issues, class sizes are at maximum capacity, with many students there to check off boxes. The most successful programs I have experienced are those that hold the students accountable for executing something they learned from the class via an "Action Plan" with their supervisors. Whether it is building better requirements, performing more effective market research, etc., students create tasks, due dates, and other items consistent with improving performance in that area. Many of the courses out there do not do this, so the student takes the binder back to their office, where it collects dust along with the others. There is little to no follow-up, and the best-practice lessons are quickly forgotten, replaced by the day-to-day grind of trying to get paper out the door with little thought about effectiveness or efficiency. As I always say to students, there is ample time to fix problems, but never enough time for the upfront work to do it right in the first place.

Sat, Aug 10, 2013 Judy Wade Quantico, VA

Concur with Bruce's comment regarding Kirkpatrick's levels and Phillips' ROI as the classic models for evaluation of performance. Good training is based upon a front-end task analysis that describes the performance standard or outcome the learner must achieve for each task. These objectives should be measurable, so training effectiveness is tied to the metric for performance. In my opinion, it is far better to collect quantitative data on performance to indicate training effectiveness than to use qualitative surveys of supervisors. Transfer of learning, where the employee can perform to a standard, is the bottom line. This is Level 3 evaluation in the Kirkpatrick model. The other question on training effectiveness is how long it lasts. Can the employee still perform to standard six months or a year later?
