Why training programs often fail

In most contracts, integrators are measured on the efficiency of the training process, rather than on the increased competency of end users.

Recently, a telling photo made the rounds of our community. It showed a successful-looking executive using a BlackBerry to keep on top of business. He was standing next to a beautiful 80-foot yacht named "Change Order," and behind the yacht was a dinghy, appropriately named "Original Contract."

If nothing else, this photo reflects what's wrong with the psyche of the federal acquisition world. We just don't seem to be able to get what we need because we insist on defining what we think we want. Time and time again, experience has proven that detailed specifications are at best correct only for a relatively short time and at worst way off the mark.

Nowhere is this more evident than in user training during the deployment of major systems. Training requirements in solicitations usually focus on classroom- or computer-based instruction rather than on outcomes that meet end-user goals. The attitude is, "Training? Who cares about training?" But training is just a means to an end, and that end is how people perform once a system is deployed. Folks in that business call it "human performance." In our business, we conduct acceptance testing by having the developers who know all the ins and outs of the system confirm the adequacy of their own work, or we use third-party testers who do much the same thing.

In fact, training in its classic context is probably the least important part of any major deployment. Much more important is whether the user community accepts, understands and uses the new solution effectively.

We all acknowledge that training efforts usually fail for several reasons: 1) students come unprepared to be trained, 2) they are pulled out of their already pressure-filled jobs, so they don't want to be trained, and 3) it will be six months to a year before they see the final new system, with all the changes made after the training, in operation.

Generally, the solution is to have the systems integrator that is developing the solution teach the customer how to use it. Foxes guarding the henhouse? In most cases, the integrator would seem to be motivated to deliver less-effective training to generate more billable labor hours, right? Unfortunately, the measures of success for the training usually have nothing to do with the success of the deployment; they are attendance at classes or grades on exercises.

User acceptance, competency and efficiency are the real issues, but those are hard to measure. So the easy thing is to put in place help desks that respond to users' ongoing problems with the system, thereby adding another layer of cost to an already inefficient process. Again, the integrator has little incentive to provide effective training, because ineffective training means more help desk hours.

As in most other areas of performance-based acquisition, you need clearly defined goals supported by metrics that address processes focused on the mission; change management driven by community-of-interest practitioners; user-connected, context-sensitive self-help aids built into applications; and evaluation mechanisms that assess how well the new system is supporting practitioners.

As usual, the solution to this dilemma rests with relevant, well-defined outcomes rather than statements of work that merely define how things are done today, which guarantees that we make the same mistakes we always have, only faster.

A successful human performance program tied to the achievement of business goals should have specific outcomes, such as shortened time to competency, reduced cost of learning, self-paced learning opportunities and facilitated change management.

We should be measuring the results of training, not the training process. We should redefine training to include state-of-the-art just-in-time software that sees what a less-than-proficient user is doing and helps that user develop more proficiency. And we should be using the best technology available rather than staying stuck with training methods that time and experience have shown don't work when they are needed most.

Private-sector companies and federal agencies always look to training as a cost element that can be reduced. Frankly, that is a good idea: Training should not be a large cost component. The key element should be measurable results of the deployment that focus on user competency and a reduced cost of learning, irrespective of the cost of training. If we can just focus on the "Original Contract," we might avoid the need for that very expensive "Change Order."

Robert J. Guerra is a partner at Guerra Kiviat and Thomas Hogan is president at Hogan Government Solutions.