Transforming Learning Through Metrics
Measuring the impact of learning interventions is crucial for organizations to ensure effectiveness, drive performance outcomes, and make data-driven decisions to improve their learning programmes. Unfortunately, many training programmes suffer from a lack of clear outcomes from the start, leading to suboptimal return on investment (ROI). To address this, Learning and Development professionals must adopt a mindset for measurement, focusing on quantified business goals and meaningful partnerships with cross-functional stakeholders. By contextualizing the learning experience and managing expectations, organizations can transform their learning initiatives, maximizing ROI and achieving desired outcomes.
The man who starts out going nowhere, generally gets there.
Dale Carnegie
The Current State of Measurement in Learning
Understanding the present landscape of learning measurement is a crucial step to instigate a transformation.
For far too many organizations, the measurement of learning interventions often turns out to be an afterthought rather than an integrated part of the learning process.
ATD surveyed 779 talent development professionals. Half were managers, directors, or executives. 46% were from mid-sized organizations (500-9,999 employees), 31% from large organizations (10,000+ employees), and the remaining 23% from smaller organizations (<500 employees).
The survey used the Kirkpatrick and Phillips model to categorize learning evaluation levels.
Levels 1 and 2 were the most widely used. Level 1 measures learners' reactions (e.g., smile sheets), and Level 2 measures skills and knowledge acquisition through quizzes. Approximately 80% of organizations used these levels.
Level 3, which focuses on the application of new skills on the job, was measured by 54% of organizations through surveys or observation.
Levels 4 and 5 were less commonly used. Only 38% used Level 4, which measures business or mission effects, and just 16% used Level 5, which measures financial results.
According to the survey report, just 40% said their learning evaluation efforts helped them meet their organization’s business goals. The main barriers reported were:
- Difficulty isolating a learning programme's impact (41%).
- Lack of access to necessary data for higher-level evaluations (39%).
- High cost of conducting higher-level evaluations (33%).
Feedback collected at the end of training is notoriously superficial, focusing on the immediate experience rather than the effectiveness and application of the learning. This superficial feedback gives a skewed perception of success, misleading organizations and learners about the actual impact of the learning intervention and doing nothing to elevate the position of the learning expert as a valued cross-functional partner.
Case Study: Jonathan Sharpe, L&D, MedEquip Inc.

Jonathan 'John' Sharpe, a committed Learning and Development professional at MedEquip Inc., a high-tech medical equipment company, is entrusted with developing and implementing a sales training programme. His primary stakeholder, Robert, a determined first-line sales manager, needs an immediate boost in his sales team's languishing performance.
Working under a time-sensitive mandate, John puts together a sales training programme. On its completion, there's a clear sense of accomplishment. The sales team is full of enthusiasm, demonstrating an improved understanding of the products in a standard post-training quiz.
Months later, Robert notices a frustrating lack of improvement in the team's actual sales performance. The quiz results were positive, but there's no significant upswing in the numbers that truly matter: the sales.
As John gears up for round two of the training a few months later, he finds himself retracing familiar steps, a feeling of déjà vu pervading his preparations. The issue is not the content, but the lack of depth in the evaluation. The quiz he uses measures surface knowledge, but no strategies are put in place to track the long-term application of the training or its direct impact on sales.
John’s training intervention is a bit like filling a bucket with a large hole at the bottom. The immediate results look promising, but with no thorough measurements or reinforcement strategies, the newly acquired skills leak out, leaving little to no impact on the sales performance.
Addressing the Elephant in the Room
Confronting the Fear of Poor Results
The spectre of poor results can be an intimidating obstacle to implementing effective measurement practices. For many, it's easier to bask in the immediate positivity that comes after a learning intervention, like glowing feedback on a post-training survey, than to delve into the gritty details of how that training truly impacts performance over time.
However, avoiding the reality of poor results is not just an exercise in ignorance; it can also lead to a cyclical pattern of ineffective training. When the fear of confronting weak outcomes prevents thorough evaluation, learning professionals and managers may find themselves trapped in a loop of recurring issues.
The key to breaking free from this cycle lies in embracing the possibility of unfavourable results. This is not about seeking failure, but about making failure a stepping stone to success. In fact, poor results can provide the most useful insights, revealing the gaps in training that need to be addressed, and paving the way for truly impactful learning interventions.
Barriers to Measurement
Despite the undeniable importance of measurement, several barriers often hinder its effective implementation. These include the following, along with potential solutions:
1. Lack of time
- Prioritize measurement from the beginning: Incorporate measurement planning into the initial stages of learning and development initiatives. By including it as an integral part of the project timeline, sufficient time can be allocated for data collection, analysis, and interpretation.
- Automate data collection: Leverage technology to streamline the process of data collection, reducing the time and effort required. Use learning analytics platforms or learning management systems that can automatically capture and aggregate data, providing real-time insights without extensive manual effort.
2. Lack of clarity
- Provide training and resources: Offer learning and development professionals and stakeholders training on measurement models and methods. Provide resources such as guides, templates, and examples to enhance their understanding of effective measurement practices.
- Seek external expertise: Collaborate with external consultants or experts in learning measurement to gain insights and guidance. They can help clarify concepts, provide best practices, and offer support in implementing measurement strategies.
3. Resource allocation issues
- Communicate the value of measurement: Clearly articulate the benefits and return on investment (ROI) of measurement to decision-makers and stakeholders. Emphasize how measurement can drive improvements, demonstrate the effectiveness of learning interventions, and contribute to organizational success.
- Allocate dedicated resources: Advocate for the allocation of dedicated personnel and budget for measurement activities. Make a case for the long-term benefits and potential cost savings that can be achieved through effective measurement, highlighting the impact on learning outcomes and performance improvement.
Overcoming these barriers requires a commitment to creating a culture where measurement is seen as an essential part of learning and development — not an afterthought, but a core component of strategy and execution.
I believe learning fulfills its highest purpose with performance activation. For me, L&D detective work is measuring fulfillment of purpose.
Kevin M Yates, 'Learning Detective'
Start with Measurement Criteria
Having clear measurement criteria from the outset is pivotal for effectively evaluating the success of learning interventions. Think of it like setting out on a journey – having a defined destination enables you to plan the most efficient route and measure your progress along the way.
If you don't know where you are going, you'll end up someplace else.
Yogi Berra
In his L&D Detective Kit for Solving Impact Mysteries, Kevin M Yates makes a compelling case for the adoption of ‘Impact Standards’. Not all learning programmes are created equal, so it is important to first identify the training programmes that “require the time, resources and effort to measure impact.”
Yates’ Impact Standards require the following questions be asked from the outset during what he calls “Impact Opportunity Interviews” to enable “proactive impact planning prior to the design and deployment of your learning solution”.
The Learning Detective's 6 Impact Standards
- Payoff: Are there significant investments of time, money or both?
- Power: Does it have support for activating performance?
- Pinpoints: Does it have measures or key performance indicators?
- Priority: Does it have leadership attention, visibility or sponsorship?
- Position: Is there alignment with a business goal or strategy?
- Purpose: Are there specific targets for performance outcomes?
Impact standards indicate how well a learning solution is designed to impact performance and business goals. The more standards met, the greater the potential for impact. Yates also notes that without these Impact Standards learning solutions are hard to measure. The standards are essential to establish the “purpose and intention for impact” first.
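Yates' six standards can be read as a simple screening checklist: count how many a programme meets and prioritise measurement effort accordingly. The sketch below is a hypothetical illustration of that idea; the threshold and the example programme data are assumptions, not part of Yates' kit.

```python
# Hypothetical sketch: screen a learning programme against the six
# Impact Standards to decide whether it warrants full impact measurement.
# The threshold and example data below are illustrative assumptions.

IMPACT_STANDARDS = ["payoff", "power", "pinpoints", "priority", "position", "purpose"]

def impact_score(programme: dict) -> int:
    """Count how many of the six Impact Standards a programme meets."""
    return sum(1 for standard in IMPACT_STANDARDS if programme.get(standard, False))

def worth_measuring(programme: dict, threshold: int = 4) -> bool:
    """Flag programmes that meet enough standards to justify the cost
    of higher-level evaluation (the threshold is an arbitrary example)."""
    return impact_score(programme) >= threshold

sales_training = {
    "payoff": True,     # significant investment of time and money
    "power": True,      # managers support on-the-job application
    "pinpoints": True,  # KPIs defined, e.g. monthly sales volume
    "priority": False,  # no executive sponsorship yet
    "position": True,   # aligned with a revenue-growth strategy
    "purpose": True,    # specific performance targets set
}

print(impact_score(sales_training))    # 5
print(worth_measuring(sales_training)) # True
```

A programme meeting five of the six standards would clearly merit the time and resources of higher-level evaluation; one meeting only one or two probably would not.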
Examples of KPIs - via the Learning Detective
Market share, employee performance, customer satisfaction, operations, quality, time, growth, errors, volume, engagement.
3 Useful Models for Measuring Learning Impact
a) Applying the Kirkpatrick Model for ROI Measurement
The Kirkpatrick Model, developed by Donald Kirkpatrick in the 1950s, has stood the test of time as a valuable tool for evaluating the effectiveness of training programmes. This model comprises four levels, explained very simply here:
1. Reaction: Did they enjoy the training?
2. Learning: Did they pass the assessment?
3. Behaviour: Do they work better?
4. Results: Did business metrics improve?
The Kirkpatrick Model offers a structured way to assess the impact of training, allowing stakeholders to gauge the return on their investment and identify areas for improvement.
Using the Kirkpatrick Model, Indiana University Health increased on-the-job compliance scores by over 20% and decreased medication errors with a severity level E or higher by 67% over a three-year period. Emirates Airline increased customer satisfaction ratings in four key areas and created a double-digit decrease in customer complaints.
However, the Kirkpatrick Model is not without its limitations. One important criticism is that the model doesn’t tell us much about the ongoing measurement of the four levels over time. While the learning intervention may have an initial impact, that impact may fade over time. Evaluations should be ongoing and measure lasting impact.
b) Anderson's Value of Learning Model
Anderson’s Value of Learning Model was developed by Valerie Anderson and published by the Chartered Institute of Personnel and Development. The model is a three-stage cycle intended to be applied at the organization level rather than to specific learning interventions:
Stage 1: Determine current alignment against strategic priorities, e.g. driving sales, reaching a new market, leadership development. A learning programme that significantly improves the technical skills of the target audience may seem successful. However, if the organization's primary need is to enhance leadership and communication abilities, then the programme is poorly aligned with the organization's development priorities.
Stage 2: Use a range of methods to assess and evaluate the contribution of learning. Four areas of evaluation are recommended:
- Learning function measures to evaluate the efficiency and effectiveness of the learning function;
- ‘Return on expectation’ measures, for example if the goal is to shorten a process by a number of days or hours, this measure would be used;
- Return on investment measures;
- Benchmark and capacity measures, whereby learning process and performance is compared with internal or external standards.
Stage 3: Establish the most relevant approach for your organization – this will depend on the stakeholders’ objectives and values. As such, the model suggests four categories should be considered:
- Emphasis on short-term benefits
- Emphasis on long-term benefits
- Senior management trust in the learning contribution
- Requirement for learning value metrics

The Anderson model has three notable advantages:
- It recognises that organizations are different and, as such, require different approaches
- It prioritises alignment of learning with strategy
- It addresses evaluation and learning value challenges
Criticisms of the Anderson model are that it does not offer in-depth analysis of individual training programmes, nor direction on how to evaluate individual learning and development initiatives. For this reason, the model itself recommends combining it with other evaluation approaches to establish whether strategic priorities are being met.
c) The Performance Consulting Model and its Role in Learning Measurement
The Performance Consulting model is a sound framework for measuring learning impact. Rooted in the concept of partnership, this 7-step model can be summarised as follows:
- The contract: the presenting problem; time available, expectations
- Who is involved?
- What are they doing now (actual)?
- What do we want them to do (desired)?
- What is the cost of the gap?
- From causes to solutions (knowledge, skills, motivation, environment)
- Action plan: who, what, when
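The "cost of the gap" step is what turns the model into a business conversation: the difference between actual and desired performance, priced per unit. The sketch below is a hypothetical illustration of that arithmetic; the formula structure, figures and parameter names are assumptions for the example, not part of the model itself.

```python
# Hypothetical sketch of the "cost of the gap" step in the Performance
# Consulting model: price the shortfall between actual and desired
# performance. Formula shape and all figures are illustrative.

def cost_of_gap(actual: float, desired: float, value_per_unit: float,
                performers: int, periods_per_year: int = 12) -> float:
    """Annualized cost of the performance gap across a team."""
    shortfall_per_period = max(desired - actual, 0.0)
    return shortfall_per_period * value_per_unit * performers * periods_per_year

# Example: each rep closes 8 deals a month against a target of 10;
# each deal is worth £2,500 in margin; there are 12 reps on the team.
print(cost_of_gap(actual=8, desired=10, value_per_unit=2_500, performers=12))
```

A gap priced this way gives stakeholders a ceiling for what the learning intervention (and its measurement) is worth spending.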
In common with other models, the Performance Consulting model will require the investment of time and resources to derive its benefits.
Measure what is measurable and make measurable what is not so.
Galileo
Enhancing Measurement Practices: Practical Tips for Organizations
Learning Detective Kevin Yates emphasises the importance of focusing on performance outcomes rather than learning objectives when measuring learning impact. This perspective shifts the framing of the learning. So, “You will know how to handle conflict” becomes “Manage and resolve conflict by interpreting behaviour instead of responding emotionally.” Outcomes are thus action-based and describe the skill or capability.
Incorporating diverse evaluation methods at different stages of the learning process can provide a comprehensive understanding of their effectiveness. For example, using pre- and post-training assessments, job simulations, and on-the-job observations can assess knowledge acquisition, application, and behaviour change.
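At its simplest, pairing pre- and post-training scores per learner quantifies knowledge gain and flags who may need reinforcement. The sketch below is a minimal illustration; the learner names, scores and the flat gain metric are all hypothetical assumptions.

```python
# Illustrative sketch: pair pre- and post-training assessment scores
# per learner to quantify knowledge gain. All data is hypothetical.

from statistics import mean

pre_scores  = {"alice": 55, "bianca": 70, "chen": 60}
post_scores = {"alice": 80, "bianca": 85, "chen": 75}

# Per-learner gain in percentage points.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
avg_gain = mean(gains.values())

print(gains)  # {'alice': 25, 'bianca': 15, 'chen': 15}
print(f"average gain: {avg_gain:.1f} points")
```

Even this minimal comparison is a step beyond a post-training quiz alone, because it separates what learners already knew from what the intervention added; pairing it with later on-the-job observation addresses behaviour change (Level 3).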
Leveraging technology-enabled learning analytics platforms can facilitate data collection and analysis, allowing for real-time tracking of learning outcomes and performance improvements. Regularly reviewing and analyzing the collected data, along with involving stakeholders in the evaluation process, can provide valuable insights for continuous improvement.
Organizations must foster a culture of shared accountability and prioritize measurement in learning and development strategies. The stakeholders initiating the learning request should take responsibility for defining desired outcomes and collaborating with their learning partners on evaluation methods to assess their achievement.
The Heart of Measurement: Questioning, Evaluating, Re-visiting the Outcomes
Measurement, in its essence, is not just about collecting data or ticking boxes. It's an exercise that demands we dive deep into the heart of our efforts and initiatives, asking the tough questions that some might shy away from. This translates to questioning the efficacy of our strategies, the relevance of our content, and the resonance of our delivery methods.
A few useful questions include the following:
- Do we know what we are trying to achieve and how it links to performance?
- Are our efforts resulting in the desired learning outcomes?
- How will we know if the skills being taught are being effectively implemented on the job?
These are some examples of questions that a committed learning and development professional or sales manager needs to ask.
Make an IMPACT
The goal is to turn data into information, and information into insight.
Carly Fiorina
Measurement is key to Purposefully Blended’s signature IMPACT approach to learning and development. We are trusted by global clients to measure learning impact and maximise the ROI of their learning and development initiatives. For information on how we can support your organization, get in touch today.
How's Your Organisation Faring?
Performing in a volatile, uncertain, complex and ambiguous world warrants support from a trusted partner. Purposefully Blended continues to support global Learning and Development Managers with the capabilities or additional expert resource they need to identify, build and implement effective blended learning solutions consistently and at scale. We continue to equip First-Line Managers with coaching capabilities that embed, apply and sustain the learning. When these two roles work in harmony, they have a dramatic, transformative impact on outcomes.
Interested in getting our help to drive performance in your organisation?
Lucy Philip, Purposefully Blended, Founder
Lucy Philip founded Purposefully Blended in 2015 out of a profound sense of mission and possibility.

A highly experienced leader, ICF certified coach and mentor to Learning Partners and Leaders, Lucy has witnessed firsthand the unique challenges and pressures faced by those in Learning and Development (L&D) and Leadership Roles.
Purposefully Blended
Purposefully Blended is a boutique Learning and Development Consultancy that blends learning design expertise with high-impact leadership practices to drive transformational change in both organisations and individuals.
Over the last decade the company has established a strong reputation for helping global organisations through tailored programmes that incorporate formal, informal, mentoring and coaching approaches to learning.
We support and develop leaders at all levels to develop the confidence and skills around: Positive Intelligence (Strength of Mind), Emotional Intelligence (Depth of Heart) and Intrinsic Motivation (Purpose and Drive).




