Using the same numbers as both KPIs and decision guides can drive counterproductive staff actions and damage your business in ways that often go unnoticed. We know we should be using data to measure and improve our businesses, but how do you design metrics that avoid doing harm?
A management team at a consulting company ran sales-call scheduling competitions when the numbers weren’t where they needed to be. Volume of sales calls is a common leading indicator of revenue in a B2B client-service environment: simply put, if you don’t have enough calls, you don’t have enough opportunities to close business, and you’ll have no chance of hitting your revenue targets. The competitions rewarded the staff who booked the highest volume of calls with prizes and accolades.
The competitions generated lots of calls and the metric went up. Problem solved, right?
Well, yes, but they’d created a new problem.
KPIs (key performance indicators) or business performance metrics are numbers designed to inform you of the state of a specific area of operation.
How many sales calls do we have scheduled?
The context around the metric tells you whether it is good or bad.
What is our expected conversion rate? What is our average sale price? Do we have enough calls to meet our revenue target?
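That last question is simple arithmetic. As a rough sketch, the back-of-the-envelope check looks something like this (the figures are invented for illustration; they aren’t taken from the consulting company in this story):

```python
# Hypothetical back-of-envelope check: are enough calls booked to hit a
# revenue target? All figures below are invented for illustration only.

revenue_target = 500_000     # quarterly revenue target
average_sale_price = 25_000  # average value of a closed deal
conversion_rate = 0.10       # share of sales calls that convert to a deal

deals_needed = revenue_target / average_sale_price  # 20 deals
calls_needed = deals_needed / conversion_rate       # 200 calls

calls_scheduled = 150
shortfall = max(0, calls_needed - calls_scheduled)
print(f"Need ~{calls_needed:.0f} calls, have {calls_scheduled}, short by {shortfall:.0f}")
```

Read in that context, the call-volume number is doing exactly what a KPI should: telling management whether the pipeline is on track.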
Data-driven decisions are a different setting. Decision metrics involve audience perception: how people feel about their own contribution to the metric’s outcome, and what actions they will take to change it.
In this case, ‘how do I drive call volumes?’ becomes ‘how can I win the competition?’
When metrics move settings, they can take on a different meaning in the eyes of the new audience. A number that works well as a KPI can, when reused as a decision enabler, drive bad behaviours, incite bad decisions, and even destroy morale.
These actions stop you reaching your goals as quickly as you could, and can even send your performance backwards. When a metric is moved to a new audience it must be translated so that it drives the right behaviours and decisions, or your business will suffer.
Management were using the call-volume metric to assess whether they would hit their financial targets, but when they moved it to represent an individual target, they inadvertently drove the wrong behaviours amongst staff and discouraged their best performers.
Prize winners were often staff who had received mediocre performance reviews. Competition frequency increased as the company neared its year end, and, not coincidentally, so did the gap between the volume of calls scheduled inside and outside the competition windows. Staff knew they were in competition with each other, so they didn’t actively share successful techniques.
Staff were sandbagging, procrastinating, and gaming the system to win themselves a new PlayStation and a handshake from the MD. Using the same metric both as a key performance indicator and to drive staff actions had caused detrimental behaviours.
I know people who avoid Lidl for this reason. Lidl’s checkout staff are measured on item throughput rate. In a management context, throughput rates help them understand how quickly queues are reducing, analyse whether certain products are causing delays, and identify staff who need more training. The metric enables them to improve the overall in-store experience for the customer.
As a decision driver for the staff, however, it means the customer is rushed through: your goods pile up at the end of the belt far faster than you can stuff them into your bag. I’ve also observed some staff asking ‘cash or card?’ before you’ve had a chance to look up, opening the register if you say cash; I suspect the timer stops when the register opens. It always makes me feel panicked. I prefer the Lidl stores that have nice, calm self-checkouts.
*For the record, this analysis is based on my own observation; I don’t have first-hand information about the metrics at Lidl.
Has your business moved metrics? Are you unwittingly causing harm? Do the same metrics appear in your management reports and on your team’s individual performance reviews?
Good news: you can avoid or correct this issue.
For the consulting business, the sales managers were originally trying to predict whether they would reach their financial targets. Their metric was designed to signal to management that something needed to be done about call volume if they were going to hit target. A well-designed metric for its purpose.
When they moved this metric to a new audience, its purpose changed. They were now trying to drive behaviours.
The metric had become an individual competitive target: ‘what do I need to do to win this competition?’ It represented something different for this new group. Their behaviour was to schedule as many calls as they could during the competition window, or to become frustrated that they had already done their job and resent that management was rewarding the poorer performers … again.
As a short-term, last-ditch fix this worked well, but repeatedly using the metric in this way was not improving performance sustainably.
When designing any metric, you need to think about its purpose.
Are you trying to measure something that has happened? Are you trying to predict your future outcomes? Are you trying to drive decisions? Are you trying to adjust behaviours?
Once you know its purpose, then ask yourself the questions that follow.
Never start with the data you have; always start with the question. A common mistake is squeezing readily available data into a metric rather than making sure it paints an accurate picture.
If the metric is intended to drive decisions or behaviours, first think about your goal. I’m willing to bet it’s not to increase a metric; it’s to improve performance by orchestrating behaviour or enabling good decision-making among your people.
Improving performance is about the behaviours, actions, and decisions of your staff.
What do you want them to do? How do you want them to act?
In our example, management wanted staff to schedule more calls. They didn’t want to run competitions; they needed the right level of call volume across each quarter. That was their goal. The necessary behaviour was consistent, high-volume call scheduling, with calls logged in the system as they were booked. Those were the actions that would drive their end metric.
They could achieve their goal more effectively if higher performers enabled the lower ones; if higher performers felt rewarded, they would drive improvement across the whole group. And if staff logged calls as they were scheduled rather than saving them for competitions, management would have a more accurate understanding of the volume scheduled.
What’s the best way to achieve your goal? What actions would help drive it forward?
Redesigning the metric around this new understanding of its purpose, its goal, and the desired staff actions would drive higher performance. Rather than measuring the number of calls, we took the percentage already scheduled as a baseline and measured performance improvement against it. This rewarded the highest performers from the outset and removed the sandbagging behaviour. They set up mixed teams so that higher performers could help lower performers improve, and pooled the prize funds to make rewards more meaningful for staff. Outside of competition time it was in the staff’s interest to schedule as much as they could; there was no need to game the system for the competition.
Changing the metric used for staff measurement ultimately drove the management metric, which in turn improved the business. It also improved morale, enabled cross-learning, and ensured the highest performers were retained.
Optimising metrics for the eyes of their audience may be a subtle exercise, but the impact of taking the time to do it can be dramatic. Thinking about the human element of metric design is critical for successful data-driven decision making. The source data will likely be the same, but making sure you answer the questions above for each of your decision-enabling metrics will help you avoid this trap.
Match your measurements to their purpose and audience, and you’ll see improvements not only in the numbers but in your staff’s morale and performance too.