Elwood F. “Ed” Holton III, former director of the School of Human Resource Education & Workforce Development at Louisiana State University, recognized as early as 1996 that the Kirkpatrick Model of Training Assessment, although so widely adopted that it has become virtually an industry standard, had several serious drawbacks (Holton, 1996). To begin with, he noted that the Kirkpatrick model is essentially a taxonomy, or classification scheme, and he stated, “One shortcoming of taxonomies is that they do not fully identify all constructs underlying the phenomena of interest, thus making validation impossible” (Holton, 1996, p. 6).
In other words, there wasn’t sufficient research demonstrating that greater satisfaction with a program led to any greater learning, that greater learning led to any greater application, and so on. Thus, while the Kirkpatrick Model gives the appearance of being a hierarchy, it really just lists four different (and possibly even unrelated) outcomes of leadership training. Holton believed a more integrated model was necessary, one that had causal relations that could be validated via research.
The model Holton proposed, therefore, ends up looking relatively similar to the type of outcomes assessment most academic leaders are already familiar with.
1. Learning outcomes for the training program are established: What should the participant know or be able to do after the program is over that he or she didn’t know or couldn’t do before the program began?
2. The participant’s individual performance is then evaluated to determine what he or she is doing differently in light of the program’s learning outcomes: How did the employee’s performance change as a direct result of the program content?
3. Results at the institutional level that flow from that individual employee’s performance are then measured to determine the effect of the training: What organizational benefit, if any, occurred as a result of the person’s participation in the training?
With the Holton Model, there is a direct connection from 1 to 2 to 3 because of the way components of the assessment are defined from the start. Moreover, by examining individual employees’ performance rather than participant behavior in the aggregate, other factors—such as personal motivation and willingness to learn, the willingness of the employee’s supervisor to support the implementation of new procedures, and the like—can be identified and, if necessary, controlled for. (On the importance of taking individual motivation into account when assessing training programs, see Laird, Naquin, & Holton, 2003, pp. 94–98.)
The difference is similar to the way many institutions have changed from counting enrollments in the aggregate to tracking the progress of individual students. Knowing that the enrollment of the first-year class one year was 10,000 and that the enrollment of the second-year class the following year was 9,000 doesn’t tell you that you had a 10 percent attrition rate. For all you know, you could have had a 50 percent attrition rate that happened to be counterbalanced in part by the arrival of 4,000 transfer students. Unless you know the path of individual students, you can’t determine whether you have a retention problem and, if so, how you might start to fix it.
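The arithmetic behind the enrollment example above can be made concrete with a short sketch. The function names and figures are illustrative only (taken from the hypothetical numbers in the text, not from any cited study): the same aggregate headcounts can conceal very different attrition rates once transfers are tracked individually.

```python
def naive_attrition(year1_count, year2_count):
    """Attrition inferred from aggregate headcounts alone."""
    return 1 - year2_count / year1_count

def cohort_attrition(year1_count, returning):
    """Attrition computed by tracking individual students:
    `returning` counts only members of the original cohort
    who are still enrolled the following year."""
    return 1 - returning / year1_count

# Aggregate view: 10,000 first-years followed by 9,000 second-years
# looks like 10 percent attrition.
print(f"{naive_attrition(10_000, 9_000):.0%}")  # 10%

# Cohort view: only 5,000 of the original 10,000 returned, and the
# arrival of 4,000 transfer students masked the loss inside the same
# 9,000-student headcount -- a 50 percent attrition rate.
print(f"{cohort_attrition(10_000, returning=5_000):.0%}")  # 50%
```

The point of the sketch is simply that the aggregate calculation cannot distinguish the two scenarios; only the individual-level count can.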
In much the same way, when using the Kirkpatrick Model, if you know only that the participants as a group learned X and that Y then occurred, you can’t tell whether X truly caused Y. Only by following the path from learning outcomes, to individual learning, to the changes in individual performance that result from that learning, to institutional impact can you determine what colleges and universities want to know: Are they getting a sufficient return on the investment they’re making in leadership development?
One further benefit that derives from the Holton Model is that it allows for better alignment of the leadership development program with institutional goals. Since the entire model begins with the establishment of learning outcomes, it allows institutions the possibility of determining whether the proposed outcomes reflect the school’s strategic direction.
Adapting the Holton Model from a corporate setting to an educational setting does, of course, create several significant challenges. To begin with, developing the training outcomes for all workshops and other programs in advance, having those outcomes approved as well as aligned with institutional goals, and then tracking the performance of each person who participates in leadership training can be a daunting task. Larger institutions with well-staffed leadership programs offered through a cohort approach may be able to do so effectively. But smaller schools (where the leadership program may well be run by only one person who also has other assignments) that provide workshops anyone may attend (and thus might serve dozens or even hundreds of people each year) will find this strategy simply too cumbersome to implement.
Second, the suspicion many academics still have about any assessment strategy—“Are you really just assessing the program itself, or is this a veiled way of assessing me?”—will only be exacerbated by an approach that carefully monitors the performance of individual participants in the leadership training. This type of assessment could even discourage people from attending training they need by making the burden of individual performance evaluation following the training seem too threatening or simply not worth the extra effort.
Nevertheless, despite the limitations or potential challenges of both the Kirkpatrick and Holton Models, they do make it clear that mechanisms for assessing the impact of leadership training exist; there’s no need to reinvent the wheel. And because many institutions still don’t assess the impact of their leadership development programs at all, any approach to measuring the impact of these activities is certainly better than no such attempt. Leadership development programs can thus choose aspects of the Kirkpatrick Model, the Holton Model, or both, as their resources allow and their philosophies dictate, at least as an initial step toward determining whether the institution is receiving the return on investment it hopes for.
Holton, E. F. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7(1), 5–21.
Holton, E. F., & Baldwin, T. T. (2003). Improving learning transfer in organizations. San Francisco, CA: Jossey-Bass.
Laird, D., Naquin, S. S., & Holton, E. F. (2003). Approaches to training and development (3rd ed.). Cambridge, MA: Perseus.
Phillips, J. J., & Stone, R. D. (2011). How to measure training results: A practical guide to tracking the six key indicators. New York, NY: McGraw-Hill Professional.
Jeffrey L. Buller is director of leadership and professional development at Florida Atlantic University and senior partner in ATLAS: Academic Training, Leadership & Assessment Services. His latest book, the second edition of The Essential Academic Dean or Provost: A Comprehensive Desk Reference, is available from Jossey-Bass.