A recent report from the World Economic Forum and PwC found that investment in closing the skills gap could boost GDP by $6.5 trillion by 2030. That’s a lot of money at stake, and yet most companies measure the impact of their investments in reskilling programs using soft metrics like completion rates, satisfaction scores, or employee feedback. The author suggests four sets of measures that, taken together, could form a comprehensive scorecard for your company: 1) Cost metrics, or comparing the costs of reskilling with the costs of not doing so; 2) Productivity metrics, or quantifying the speed or effectiveness with which the skills are deployed; 3) People metrics that measure the stability and satisfaction of your workforce; and 4) Sponsor satisfaction metrics, or asking whether managers see a difference in their team’s work after the reskilling effort.
As automation and AI increasingly take hold in the corporate world, many companies are increasing their investments in skill-building of all kinds: upskilling, reskilling, and even “outskilling” – where employers train employees who are being laid off to help them get their next job. Some of these investments help workers adopt new tools to speed up parts of their jobs. Others aim to fill open jobs within the company, addressing the paradox wherein automation and AI cause jobs to disappear from one part of the company but also cause a shortage of skilled labor elsewhere.
The coronavirus pandemic has driven companies to increase these investments, as the underlying forces of automation, AI, and digitalization have accelerated.
And yet, the way companies measure the impact of these investments remains fuzzy. In a global survey of learning and development (L&D) professionals, LinkedIn found that the majority of measures used to assess the impact of training programs are soft metrics, like completion rates, satisfaction scores, and employee feedback. Comparatively few respondents used harder metrics, such as increases in employee retention, productivity, or revenue.
CEOs and CFOs should demand better measures, particularly as the amount of money at stake continues to increase. A recent report from the World Economic Forum and PwC found that an effective investment in closing the skills gap could boost GDP by $6.5 trillion by 2030.
Over the last decade, I’ve helped design and deploy skill-building programs at dozens of large companies around the world. I’ve seen almost as many ways of measuring their impact. Looking back on the programs that were considered successful, I’ve identified four measures that, taken together, can inform a comprehensive scorecard to measure the return on investment of skill-building programs:
1. Cost Metrics
These metrics compare the cost of reskilling with that of not doing so. To calculate, first add up the total cost of your reskilling initiative, including direct training costs, employees’ time away from work, and any administrative costs. Research shows this averages out to $24,800 per worker.
Now add up the costs of not reskilling. Would you have to hire new people to fill roles? Consider recruiting and onboarding costs. Would you have to lay off an employee if you weren’t going to reskill them? Consider severance costs and the administrative costs of managing that difficult process.
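As a rough illustration, the comparison described above comes down to tallying two totals and taking the difference. In the sketch below, only the $24,800 average reskilling cost comes from the research cited in this article; every other figure is a hypothetical placeholder you would replace with your own numbers.

```python
# Rough per-worker cost comparison: reskilling vs. not reskilling.
# The $24,800 reskilling average is cited in the article; all other
# figures are hypothetical placeholders for illustration only.

def reskilling_cost(direct_training, time_off_work, admin):
    """Total cost of reskilling one worker."""
    return direct_training + time_off_work + admin

def replacement_cost(recruiting, onboarding, severance, layoff_admin):
    """Total cost of laying off one worker and hiring a replacement."""
    return recruiting + onboarding + severance + layoff_admin

reskill = reskilling_cost(direct_training=15_000, time_off_work=7_000, admin=2_800)
replace = replacement_cost(recruiting=20_000, onboarding=10_000,
                           severance=25_000, layoff_admin=5_000)

print(f"Reskilling:             ${reskill:,}")            # $24,800
print(f"Not reskilling:         ${replace:,}")            # $60,000
print(f"Savings per worker:     ${replace - reskill:,}")  # $35,200
```

Even if the inputs are estimates, putting the two totals side by side per worker makes the directional case concrete.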
Even if you can’t neatly connect reskilling costs with layoff savings, directional cost-savings metrics can still be powerful. For example, Capital One, a U.S. bank, launched the Capital One Developer Academy to train young liberal arts and humanities graduates in software engineering. Instead of competing with tech giants to pay a finite pool of software engineers increasingly exorbitant salaries, Capital One found it could increase supply by building its own talent pipeline.
If your reskilling initiative is associated with digital transformation, a cost-savings calculation may be the only metric you need. A report commissioned by General Assembly (where I used to work) and produced by Whiteboard Advisors found that for expensive roles like software engineer and data scientist, reskilling can pay for itself as much as six times over.
Of course, if you can point to how skill-building helped your company make money, that’s even better. A global professional services firm found that the “billing rates” it could charge for consultants who had been through a data analysis upskilling program went up 3%, more than justifying the investment.
2. Productivity Metrics
These metrics quantify the impact of the skill-building program by measuring the change in the speed or effectiveness with which that skill is deployed. For example, a team of analysts at the insurance company BNP Paribas Cardif participated in an advanced data analysis course where they learned how to use new tools, like Python. Shortly after, team members reported being more efficient thanks to the features of this high-level programming language. One participant even described how she was now able to perform a routine task that previously took a full hour in only five minutes.
In another example, the global beauty products giant L’Oréal put its marketing team through an intensive workshop on search engine optimization. Shortly after the workshop, the team found that its core products saw a spike in traffic from search engines.
And in a third example, a team at a U.S. health care insurance company was able to save more than $9 million by applying a tool they learned how to build in a data-analysis course.
To incorporate productivity metrics into your ROI scorecard, start by identifying your desired outcome. Too many skill-building programs are named for the topic in which participants will be trained, not for the result the organization will use to track success. For example, a “Data Analytics Workshop” at an insurance company may be better titled “Boosting Our Claims Response Time Through Data Analytics.” If the result you’re aiming for is clear, participants will have a better sense of how to use their new skills, and it will be easier to measure whether the result is actually achieved.
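Taken at face value, the hour-to-five-minutes example above translates into a measurable annual saving. Here is a back-of-the-envelope sketch; the task frequency and loaded hourly rate are hypothetical assumptions, not figures from the examples:

```python
# Back-of-the-envelope annual value of the "one hour to five minutes"
# improvement described above. Task frequency and the loaded hourly
# rate are hypothetical assumptions for illustration.

minutes_before = 60        # from the example: task took a full hour
minutes_after = 5          # from the example: now takes five minutes
tasks_per_week = 10        # assumed frequency
weeks_per_year = 48        # assumed working weeks
hourly_rate = 75.0         # assumed loaded cost per hour, USD

minutes_saved = (minutes_before - minutes_after) * tasks_per_week * weeks_per_year
hours_saved = minutes_saved / 60
annual_value = hours_saved * hourly_rate

print(f"Hours saved per analyst per year: {hours_saved:.0f}")  # 440
print(f"Approximate annual value: ${annual_value:,.0f}")       # $33,000
```

Multiplied across a team, even a single routine task can justify the cost of a course, which is why pinning down the target task and its frequency up front matters.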
3. People Metrics
These are metrics that measure the stability and satisfaction of your workforce. Employee retention is a good example of a people metric. There is a well-researched positive link between the degree to which a company invests in developing its people and employees’ propensity to stay with the company. For example, an IBM study found that new employees are 42% more likely to stay if they are receiving training that helps them do their job better.
You can build off this. Try to quantify this link for your skill-building program. A simple way to do so is to tag participants as part of any regular employee engagement surveys your company conducts. Depending on the software you use and your policies about data collection, you can do this by having employees self-report the training programs in which they’ve participated recently or by connecting your learning management system with your employee survey platform. If you see a meaningful difference in job satisfaction between those who participated in your program and those who didn’t, add that to your scorecard. Some companies go further to protect these investments: a large U.S. bank, for example, asks employees to commit to staying for a certain amount of time in exchange for the cost of their reskilling program.
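Once participants are tagged in your engagement survey, the comparison can start as simply as a difference in group means. A minimal sketch, using hypothetical satisfaction scores on a 1-to-5 scale:

```python
# Compare mean satisfaction scores (1-5 scale) between employees who
# participated in a skill-building program and those who did not.
# All scores below are hypothetical survey responses.
from statistics import mean

participants = [4.2, 4.5, 3.9, 4.7, 4.1, 4.4]
non_participants = [3.8, 3.5, 4.0, 3.6, 3.9, 3.7]

gap = mean(participants) - mean(non_participants)
print(f"Participants:     {mean(participants):.2f}")
print(f"Non-participants: {mean(non_participants):.2f}")
print(f"Gap:              {gap:+.2f}")
```

With real data you would want larger groups and a check that the gap isn’t explained by other differences between the two populations, but even this simple comparison gives you a defensible number for the scorecard.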
Other people metrics can play into your scorecard as well. Talent attraction is one example. A global professional services firm found that when it advertised its data science upskilling program on job listings, it saw a 9% increase in applications.
A final example of people metrics that drive skill-building programs comes from David Henderson, chief human resources officer at Zurich Insurance Group. Henderson has made reskilling a top priority. One of the metrics he focuses on is the percentage of jobs that are filled by internal candidates vs. external ones. As the insurance sector becomes increasingly technology-driven, the only way to fill new jobs with internal candidates is to upskill and reskill your existing workers. Over the last few years, Zurich has gone from filling a third of its positions internally to two thirds.
4. Sponsor Satisfaction Metrics
Most training programs ask participants whether they were satisfied with the course. This is a helpful data point but doesn’t necessarily correlate with true impact or return.
A more effective approach is to ask managers and leaders whether they think the training was useful for their team members. To get a more accurate answer, ask the question a reasonable amount of time after the training is completed, when managers have had a chance to see a difference in their teams’ work.
A recent McKinsey & Co. survey is a good example of this. The firm asked more than 1,200 executives about the nature and impact of their investments in reskilling. Specifically, executives were asked to rate the impact of their reskilling investments across eight different key performance indicators, from employee satisfaction and retention to customer experience and brand perception. If done consistently across different reskilling programs, with responses from the frontline managers whose teams participated, similar surveying could be a useful tool for measuring your program’s impact.
Creating Your Scorecard
Too often, CEOs and CFOs ask their learning and development (L&D) teams to demonstrate ROI after a training program is complete, without being clear about the return they want in the first place. L&D teams should require a definition of success — and advise their stakeholders on how to articulate it in a way that everyone can buy in to — before launching any skill-building program. Is the goal to fill new jobs with existing talent? How many? To improve productivity on a particular task? If so, by how much? Or perhaps to improve morale?
This clarity is critical. It should inform every aspect of the program’s design — the curriculum, the branding, how participants are selected, what activities they are expected to do as a result of the program, and how and when those activities are measured.
Another use for the scorecard is to keep track of impact stories related to your training programs. Good politicians know that dry policy and charts don’t sway voters — stories do. Take the time to interview a few participants: What’s their background? Why did they enroll in the course? How are they using their new skills? What made their experience successful?
At my previous employer, we kept a log of great student stories to which our course producers and instructors all over the world would regularly contribute. One of my favorites is the story of Anthony Pegues, who was a janitor before enrolling in our “Web Development Immersive” course, and now works as a software engineer. Another is that of Jian Wu, a trading strategist at the financial services giant State Street, who used a technique he learned in General Assembly’s “Data Science Immersive” class to automate a part of the way the firm predicted lending rates. His algorithm boosted performance by 15% compared to conventional methods.
As reskilling and upskilling take greater priority on the executive agenda, the pressure to justify investments will increase. With clear alignment on the desired outcome between executives, L&D professionals, and participants, organizations will find this justification easier to provide.
Editor’s Note: Harvard Business Publishing has a content creation and distribution partnership with Emeritus. Neither the author nor the editor who worked on this article is involved in that partnership.