The Reality of Learning-Effectiveness Evaluation
Mumbai (IN), April 2014 - Fifty percent of L&D departments have never tried to conduct learning-effectiveness evaluation beyond Kirkpatrick Level 2. Thirty-five percent have managed to reach Level 3, and only 15% have reached Level 4. Tata Interactive Systems' webinar on "Learning-Effectiveness Evaluation" highlights the key challenges facing organizations and is now available on video.
Learning & Development professionals across geographies responded in large numbers to a poll on the challenges of learning-effectiveness evaluation, conducted by Tata Interactive Systems (TIS) as part of its interactive webinar on "Practical Approaches towards Learning-Effectiveness Evaluation". The attendees, most of whom served in Learning & Development functions, were polled on several parameters, ranging from "how they evaluate learning effectiveness" to "how effective they are" and "what tools are used to measure effectiveness".
The webinar, presented by Poushali Chatterjee, Principal Learning Designer & Delivery Head, Kolkata Centre, TIS, focused on the challenges facing L&D, learning-effectiveness models, and the creation of an evaluation plan. She emphasized that the evaluation plan should be built into the learning activities themselves rather than treated as a separate exercise.
"Having evaluation measures in place acts as a leading indicator, as it is an in-process measure and enables you to take pre-emptive actions to improve your chances of achieving your Learning & Development objectives," said Poushali. "A lag indicator, however, is when you measure the effectiveness when the process is over and you are measuring in retrospect. Hence, it is always important for training departments to align their objectives with the organization’s goals and then with the department’s or individual’s objectives."
The polls conducted during the webinar threw up some interesting perspectives. When asked about the key challenges in measuring learning effectiveness, attendees identified lack of knowledge about evaluation mechanisms (61%) as the primary concern, with lack of time (33%) cited as another reason for not measuring learning effectiveness. This challenges the popular assumption that measuring learning effectiveness is itself a tough task, and indicates that learning professionals may instead need to deepen their knowledge of evaluation mechanisms and start applying them.
In another poll, a sizeable majority of the webinar attendees believed that training-evaluation data would help improve the performance of both employees and the organization. They were also of the opinion that training could become a partner to the business rather than just a vendor.
A further poll asked about organizations' readiness to conduct evaluation at the different Kirkpatrick levels. (In the Kirkpatrick model, Level 1 measures learner reaction, Level 2 learning, Level 3 on-the-job behaviour, and Level 4 business results.) The results showed that 50% of L&D departments have never tried to conduct learning-effectiveness evaluation beyond Level 2, while 35% have managed to reach Level 3 and only 15% have reached Level 4. So although participants agreed that evaluation helps increase the business value of training, most training departments are not going beyond Level 2 evaluation, which could well be due to the lack of knowledge about evaluation mechanisms cited in the key-challenges poll.
This tied in well with the objective of the webinar, in which Poushali explained how to build an evaluation plan and which mechanisms can be used. Responding to questions from the attendees, she went on to address specific challenges, such as frequent attrition, and to answer questions such as whether quantity and quality are equally important in evaluation.