This HBR Blog Network post about measuring performance reminded me of countless conversations I have had with managers about the stated importance of quality and the actual importance of productivity. It is a classic organizational screw-up to say, "Quality is our number-one priority," but then scream bloody murder the very second productivity falls below expectations. In an instant, there goes the focus on quality, right out the window. Once you do this, almost none of your people will ever again believe in your focus on quality. You have just destroyed any future attempt to focus on quality. People will no longer listen to you because now they know that productivity is all that matters.
The root cause of this trade-off is that productivity is easier to measure. Most businesses produce reports that show the productivity figures that matter most. When it comes to quality, perhaps only high-performing manufacturing companies have figured out how to measure it, through error, scrap, or rework rates. In the knowledge economy, quality is far harder to measure. The consequence is that we tell our teams that quality is our top priority, but our actions say that only productivity matters.
Imagine smoking a cigarette while telling your teenager, “Don't smoke. It's bad for you.”
Of course productivity matters. It matters a great deal. The problem is not focusing on productivity. The problem is telling people they should focus on quality, failing to properly measure it, and then putting people on performance improvement plans the moment they focus on quality at your perceived neglect of productivity. As a manager, it is your job to select the right measurements for your business, measure them, and then align your message with your actions. The same is true for learning professionals.
Measuring Performance for Learning Leaders
If a learning professional means what he or she says, and the learning function exists to improve performance, then he or she should measure performance results following learning experiences. This is such an important point that I would go so far as to say, "Stop measuring levels one, two, and three, and only measure performance (level four)." I know that sounds extreme to most learning professionals, but hear me out.
If you want to know if your sexual harassment training was effective, measure the number of complaints that came into HR before and after the training. Or measure the dollar amount of claims year over year to see whether it has changed. Did claims increase or decrease? Perhaps the number of complaints increased, but that can be a good thing: greater awareness flushed out harassers who had previously gone unreported.
Is a level 1 survey going to tell you that? Is a test? Certainly not.
If you want to know if your compliance training was effective, measure safety incidents, privacy complaints, or legal claims. If you want to know if your new hire training is working, measure time to proficiency, productivity, or quality thirty days after training. Measure nothing else. If people are performing to the goal, it is likely the training was effective.
This seems like a simple idea, yet learning organizations hardly ever measure the performance changes that result from learning experiences. Don't take my word for it. The ASTD State of the Industry Report consistently finds, year after year, that only a paltry percentage of organizations measure level four. So we state that our role is to help improve performance, yet we measure learner opinions and test scores rather than performance.
What do you think? Do you think if learning professionals cared about improving performance, they would measure performance and performance only?