Friday, May 13, 2011

How To Evaluate A Training Program

Since the dawn of time, when early trainers were teaching their clan members how to improve their hunting and gathering skills, training organizations have struggled with how to measure the impact of their training programs.

Since then, thanks to pioneers in the training field like Donald Kirkpatrick and Jack Phillips, we now understand that training evaluation needs to be more than administering "smile sheets" ("did you like the food?") after a program.

Ask any training professional at a training conference how to measure training, and most of them will be able to recite the industry-standard "Kirkpatrick Four-Level Model". Kirkpatrick taught us that we should measure:

1. Reaction. These are the old "smile sheet" questions, e.g., "on a scale of 1-10, please rate the instructors, materials, food, pre-work, etc." These kinds of questions are still important – they are a measure of client satisfaction. Let's face it, a hot, noisy room can kill even the best training program. Instructors love getting these too, because most instructors have huge egos and want to read about how wonderful they are, and a few of them even use them to make improvements.

2. Learning. We need to measure if the participants learned anything. Learning could be knowledge or skills.

3. Transfer. Did the participants actually behave differently back on the job?

4. Results. Did the training have an impact on business results, e.g., improved customer satisfaction, increased revenues, reduced costs, etc.?

However… if you ask these same trainers how much of this stuff they have actually implemented in their organizations, that's when someone shifts the discussion to whether leaders are born or made.

Why not? Why the gap between knowing and doing? I believe there are two main reasons why organizations are not implementing this model:

1. It's hard to do. It takes a lot of effort, time, and resources to administer all this stuff. You'd pretty much need a full-time person or department. Most training teams, when faced with the choice of using resources to measure training vs. actually doing training, would rather do.

2. You can get away with not doing it. Most executives don't ask for it, and if they do, it's probably too late to do anything about it anyway. Training evaluation is one of those things that help you win training awards, but it's not top of mind for most line managers.

I happen to believe it's important to measure the impact of training… but not that important. That is, let's do it right, but not overdo it by spending a lot of money, time, and resources.

Here's a relatively simple, yet effective system that I've seen work and that more and more companies are using:

1. First of all, trainers should not design or administer the evaluations for their own courses, in order to remove any conscious or unconscious bias. However, they should be doing ongoing "plus deltas" at the end of each course, perhaps even every day, especially for new courses.

2. All courses should be evaluated using a common platform, centrally administered (one person), with some questions being standard, so comparisons can be made. Either buy a software program, or create your own using a tool like Survey Monkey or Zoomerang. Some Learning Management Systems have measurement built into their platforms as well.

3. Questions should address all four levels of evaluation: level 1, participant reaction; level 2, learning; level 3, transfer; level 4, business results (the Kirkpatrick model).

4. For level 1, use the same basic questions for all courses (instructors, food, conference center, materials, etc.).

5. For levels 2 and 3, ask participants to estimate their increase in the knowledge or skill the course was designed to improve (e.g., 10%, 20%, etc.). I know, I know, it's not the same as a test, but for higher-level skills like "ability to see the big picture", it's a reasonable measure.

6. For level 4, identify a list of key business results (e.g., speed to market, cost reduction, client satisfaction, etc.). For each course, pick out the business results the course was designed to address and ask participants to project how much attending the course will enable them to influence each business result.

7. Administer these questions to participants immediately after the course (online).

8. Administer the same survey to participants 90 days later, but instead of asking them to project their learning, behavior change, and business result impact, ask them to look back and estimate actual results (one way to tally this before-and-after comparison is sketched after this list).

9. Administer the same 90-day survey to the participants' managers, asking them to assess the training's impact on their employees.
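For the technically inclined, here's a rough sketch (in Python) of how the before-and-after comparison in steps 7 and 8 might be tallied once the survey responses are exported. The course names, field names, and numbers are made up for illustration; a real export from your survey tool or LMS will look a little different.

    # Hypothetical survey export: each record is one participant's answers for one course.
    # "projected_" values come from the survey given right after the course (step 7);
    # "actual_" values come from the 90-day follow-up (step 8). All values are percentages.
    from statistics import mean

    responses = [
        {"course": "Big Picture Thinking", "projected_skill_gain": 30, "actual_skill_gain": 20},
        {"course": "Big Picture Thinking", "projected_skill_gain": 40, "actual_skill_gain": 25},
        {"course": "Client Service Basics", "projected_skill_gain": 25, "actual_skill_gain": 25},
    ]

    def summarize(records):
        """Average the immediate projections against the 90-day look-back, per course."""
        by_course = {}
        for r in records:
            by_course.setdefault(r["course"], []).append(r)
        for course, recs in sorted(by_course.items()):
            projected = mean(r["projected_skill_gain"] for r in recs)
            actual = mean(r["actual_skill_gain"] for r in recs)
            print(f"{course}: projected gain {projected:.0f}%, reported at 90 days {actual:.0f}% "
                  f"(gap {projected - actual:+.0f} points)")

    summarize(responses)

The same pattern extends to the manager survey in step 9; you'd just add another set of fields and average the managers' answers alongside the participants'.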

Although this may sound complicated, it's really fairly simple to design and administer. With a good techie geek, you can use the data to produce some pretty slick training dashboard reports. With the level 4 questions, you can begin to show training's impact on business results, something that's historically been very hard to do outside of areas like sales training. Of course, you would want to compare participants' and managers' estimates with actual business results. If the numbers don't match up, that's often an indication that training is not the problem, and that there may be other factors coming into play.
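To make that last point concrete, here's one way the level 4 comparison might be scripted for a dashboard. Again, the courses, numbers, and the 10-point threshold are invented for the example; the real inputs would come from your survey platform and whatever system tracks the business metric.

    # Hypothetical level 4 comparison: estimated impact on a business result
    # (from participants and their managers) vs. the actual change in that metric.
    level4 = [
        # (course, participant estimate %, manager estimate %, actual metric change %)
        ("Negotiation Skills", 18, 12, 14),
        ("Time Management", 25, 20, 2),
        ("Client Service Basics", 10, 9, 8),
    ]

    GAP_THRESHOLD = 10  # points of disagreement before a course gets flagged for a closer look

    for course, participant_est, manager_est, actual in level4:
        estimate = (participant_est + manager_est) / 2
        gap = estimate - actual
        flag = "  <-- estimates and results don't line up; look beyond training" if abs(gap) > GAP_THRESHOLD else ""
        print(f"{course}: estimated impact {estimate:.0f}%, actual {actual:.0f}%{flag}")

A big gap in either direction is a signal to dig deeper, not a verdict on the course itself.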

Has anyone used a system like this, or something better? What do you think, is it worth the bother?
 
 
 
