MET:Evaluation in Instructional Design - Kirkpatrick's 4 Level Model

This page was created by Cathy Jung (2009).


Evaluation is the systematic determination of merit, worth, and significance of something or someone using criteria against a set of standards. Evaluation is an integral component in the instructional design of training programs. Data gathered during the evaluation process provides instructional designers, trainers and organizations with information on which training programs to continue, discontinue, modify or improve. Kirkpatrick's four-level model is considered an industry standard among training communities for the evaluation of training and learning.

Purpose of Evaluation

[Image: Donald L. Kirkpatrick]

Donald L. Kirkpatrick first published his ideas on evaluation in 1959 in a series of articles aimed at stimulating Training Directors to increase their efforts to evaluate training programs.

Kirkpatrick identifies three reasons to evaluate:

  1. to justify the existence and budget of the training department by showing how it contributes to the organizational objectives and goals,
  2. to decide whether to continue or discontinue training programs, and
  3. to gain information on how to improve future training sessions.


Four Levels of Evaluation

Kirkpatrick’s four-level model of evaluation consists of Level 1: Reaction, Level 2: Learning, Level 3: Behaviour and Level 4: Results.

Level 1: Reaction

[Image: Four Levels of Evaluation]

This level measures the learner's reaction to, and perception of, the course. Potential methodologies include program evaluation sheets, interviews, questionnaires and participant comments throughout the program.

The evaluation sheets are often referred to as Smile Sheets or Happy Sheets. The implication for instructional design is that evaluation forms should be designed so that their results can be easily tabulated and then used to inform future training programs.
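
As an illustration of what "easily tabulated" can mean in practice, here is a minimal sketch in Python that averages hypothetical Likert-scale smile-sheet ratings by question; the question names and scores are invented for the example and are not part of Kirkpatrick's model.

    from statistics import mean

    # Hypothetical Level 1 "smile sheet" responses: one dict per participant,
    # mapping each evaluation question to a 1-5 Likert rating.
    responses = [
        {"content_relevance": 5, "instructor_clarity": 4, "pace": 3},
        {"content_relevance": 4, "instructor_clarity": 5, "pace": 4},
        {"content_relevance": 3, "instructor_clarity": 4, "pace": 2},
    ]

    def tabulate_reactions(sheets):
        """Average each question's rating across all evaluation sheets."""
        questions = sheets[0].keys()
        return {q: round(mean(sheet[q] for sheet in sheets), 2) for q in questions}

    for question, average in tabulate_reactions(responses).items():
        print(f"{question}: {average} / 5")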

Level 2: Learning

This level measures what the learner has learned. Potential methodologies include pre- and post-testing, observations, interviews and self-assessment.

A pre-test given prior to the program and a post-test given at the end can be used to differentiate between what the learners already knew before training and what they learned during the training program. Kirkpatrick cautions that the tests must accurately cover the material presented, or they will not provide a valid measure of the program's effectiveness in terms of learning. The implications for instructional design concern the training program content and the construction of the pre- and post-tests.
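
To make the pre/post comparison concrete, the following sketch computes each learner's raw gain (post-test score minus pre-test score) and the group average; the names and scores are hypothetical.

    # Hypothetical pre- and post-test scores (percent correct).
    pre_scores = {"Alice": 55, "Bob": 70, "Chen": 40}
    post_scores = {"Alice": 85, "Bob": 78, "Chen": 72}

    def learning_gains(pre, post):
        """Return each learner's gain (post minus pre) in percentage points."""
        return {name: post[name] - pre[name] for name in pre}

    gains = learning_gains(pre_scores, post_scores)
    for name, gain in gains.items():
        print(f"{name}: {gain:+d} points")

    print(f"Average gain: {sum(gains.values()) / len(gains):.1f} points")

Kirkpatrick's caution about test validity applies here: the gain is only meaningful if both tests accurately cover the material actually taught.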

Level 3: Behaviour

Level 3 involves the extent to which learners implement or transfer what they learned. This level differentiates between knowing the principles and techniques and using them on the job. Potential methodologies include formal testing or informal observation. This level of evaluation takes place post-training, when the learners have returned to their jobs, and is used to determine whether the skills are being used and how well. It typically involves contact with the learner and with someone closely involved with the learner, such as the learner's supervisor.

Level 4: Results

This level involves examining the impact on the organization. Kirkpatrick notes that objectives of training programs can be stated in terms of desired results, such as reduced costs, higher quality, increased production, and decreased employee turnover rates.
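
As a simple, hypothetical illustration of stating objectives in terms of results, the sketch below compares invented organizational indicators measured before and after a training program and reports the percentage change in each (a negative change means the indicator decreased, which is the desired direction for these particular indicators).

    # Hypothetical organizational indicators before and after a training program.
    before = {"cost_per_unit": 12.50, "defect_rate": 0.08, "monthly_turnover": 0.05}
    after = {"cost_per_unit": 11.25, "defect_rate": 0.05, "monthly_turnover": 0.04}

    def percent_change(before_values, after_values):
        """Percentage change in each indicator from before to after training."""
        return {
            name: 100 * (after_values[name] - before_values[name]) / before_values[name]
            for name in before_values
        }

    for indicator, change in percent_change(before, after).items():
        print(f"{indicator}: {change:+.1f}%")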


Summary of Kirkpatrick's Four Levels of Evaluation

The following summarizes Kirkpatrick's four levels of evaluation at a glance.

Level 1: Reaction
  • Measures: Satisfaction
  • Key question: What was the participants' reaction to the program?
  • Methodologies: program evaluation sheets (often called Smile Sheets or Happy Sheets), interviews, questionnaires, participant comments

Level 2: Learning
  • Measures: Knowledge
  • Key question: What did the participants learn?
  • Methodologies: pre/post testing, observations, interviews, self-assessment

Level 3: Behaviour
  • Measures: Transfer of learning
  • Key question: Did the participants' learning affect their behaviour?
  • Methodologies: testing, observation

Level 4: Results
  • Measures: Transfer or impact on the organization
  • Key question: Did the participants' behaviour changes affect the organization?
  • Indicators: reduced costs, increased productivity, decreased employee turnover


Limitations of Kirkpatrick's Model

Some of the limitations of Kirkpatrick’s model include:

  • Individual, Organizational and Contextual Influences. This model does not consider the wide range of individual, organizational, and training design and delivery factors that can influence training effectiveness before, during, or after training. The training program itself is presumed to be solely responsible for any outcomes that may or may not ensue. This model isolates training efforts from the systems, context, and culture in which the learner operates.
  • Incremental Importance of Information. This model assumes that each level of evaluation provides data that is more informative than the last, which may lead to a perception that Level 4 will provide the most useful information about the effectiveness of the training program.
  • Low Incorporation of Level 3 and Level 4 Evaluation. The results of an American Society for Training and Development (ASTD) survey presented in 2008 indicate that 91% of organizations evaluate at Level 1, 54% at Level 2, 23% at Level 3 and 8% at Level 4. The low adoption of Level 3 and Level 4 may stem from the fact that behaviour change is harder to quantify and interpret than reactions and learning, and that results across an entire organization, as opposed to for individual learners, are more challenging to assess.


External Links

Kirkpatrick, D. L. (2006). Seven keys to unlock the four levels of evaluation. Performance Improvement, 45(7), 5-8. http://www.scribd.com/doc/7841600/Donald-Kirkpatrick-7-Keys-to-Unlock-4-Levels-of-Evaluation

Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco: Berrett-Koehler Publishers. http://www.amazon.com/Evaluating-Training-Programs-Four-Levels/dp/1576753484


References

Bates, R. (2004). A critical analysis of evaluation practice: The Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27, 341-347.

Chapman, A. (2007). Kirkpatrick's learning and training evaluation model. Retrieved February 7, 2009, from http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm

Clark, D. (2007). Instructional system development – evaluation phase. Retrieved February 7, 2009, from http://www.skagitwatershed.org/~donclark/hrd/sat6.html

Crone, G. (2005). Evaluation of executive training. Treasury Board of Canada Secretariat. Retrieved February 20, 2009, from http://www.tbs-sct.gc.ca/eval/pubs/eet-efcs/eet-efcs_e.asp

Dick, W. (2002). Evaluation in instructional design: The impact of Kirkpatrick's four-level model. In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 145-153). Prentice Hall.

Kaufman, R., Keller, J., & Watkins, R. (1995). What works and what doesn't: Evaluation beyond Kirkpatrick. Performance and Instruction, 35(2), 8-12.

Kirkpatrick, D. L. (1996). Techniques for evaluating training programs. In D. P. Ely & T. Plomp (Eds.), Classic writings on instructional technology (pp. 119-141). Libraries Unlimited.

Sloman, M. (2008). The value of learning. ASTD 2008 International Conference and Exposition. Retrieved February 15, 2009, from http://www.astd2008.org/PDF/Speaker%20Handouts/ice08%20handout%20M120.pdf

Wikipedia. Evaluation. Retrieved February 27, 2009, from http://en.wikipedia.org/wiki/Evaluation

Images

Donald L. Kirkpatrick [image file]. Retrieved February 27, 2009, from http://www.amanet.org/editorial/webcast/2007/effective-training.htm

Four Levels of Evaluation [image file]. Retrieved February 27, 2009, from http://c2workshop.typepad.com/