A Blog by Jonathan Low

 

Feb 28, 2011

Psychometrics: Affinity-Based Routing and Call Center Performance

Advances in using data to improve performance continue to expand our understanding of how people think, feel, and are motivated. The following article describes a practical application of this approach to call center management. The implication is that the next time you call a company for any sort of service, more thought may have gone into how your question is handled than is apparent:

Marty Lariviere in the Operations Room via Thierry de Baillon:

"We have had a couple of recent posts about firms trying to get more out of their call centers through psychometric data. The idea is that by classifying both customers and agents along psychometric dimensions, the firm can route callers with a particular characteristic to the type of agent most likely to lead to a good outcome (where “good” is presumably defined based on what the firm wants). I have to admit that I am not overly familiar with which psychometric measures they are using, and I am not sure how well these can be measured with infrequent customer contact. At some level, this starts to sound like asking whether a Libra should date a Taurus.

With that background, I found a recent Sloan Management Review article really fascinating (Matchmaking With Math: How Analytics Beats Intuition to Win Customers, Winter 2011). It is an interview with Cameron Hurst, a VP at Assurant Solutions. Assurant Solutions sells credit insurance. You pay them every month, and then if you are, say, laid off, they help cover your credit card bills. What customers pay ranges from $10 to $80 per month, and it is not hard to see that some people may have second thoughts about paying that. What seemed like a good idea six months ago might not seem worth $20 now. Hence, their call center plays a key role in keeping customers. When customers get cold feet, it is up to call center agents to “re-sell” them on the product and retain the business. And that is where “affinity routing” comes in. They brought in business analytics experts who were already at the firm doing actuarial and similar work and asked them to look at the call center.

The first thing that was interesting about their approach was that rather than thinking about the average speed of answering phone calls, or the average “handle time,” or service level metrics, or individual customer experiences or using QA tools to find out what we did right and what we did wrong — all the things we usually consider when looking at customer and representative interaction — they started thinking of it purely from the perspective of, “We’ve got success and we’ve got failure.”

Success and failure are very easy things to establish in our business. You either retained a customer calling in to cancel or you didn’t. If you retained them, you did it by either a cross-sell, up-sell or down-sell.

So this is what they started asking: What was true when we retained a customer? What was true when we lost a customer? What was false when we retained a customer? And what was false when we lost a customer? For example, we learned that certain CSRs generally performed better with customers in higher premium categories while others did not. These are a few of the discoveries we made, but there were more. Putting these many independent variables together into scoring models gave us the basis for our affinity-based routing.
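The article does not describe the scoring models themselves, but the "what was true when we retained / lost" framing can be illustrated with a minimal Python sketch. All the names, the data, and the segmentation by premium band below are hypothetical; the idea is simply to tally historical success and failure for each (CSR, customer segment) pair:

```python
from collections import defaultdict

# Hypothetical historical call records: (csr_id, premium_band, retained?)
history = [
    ("csr_a", "high", True), ("csr_a", "high", True), ("csr_a", "low", False),
    ("csr_b", "low", True), ("csr_b", "low", True), ("csr_b", "high", False),
    ("csr_a", "high", False), ("csr_b", "low", False),
]

def affinity_scores(history):
    """Retention rate for each (CSR, customer segment) pair."""
    wins = defaultdict(int)
    calls = defaultdict(int)
    for csr, band, retained in history:
        calls[(csr, band)] += 1
        wins[(csr, band)] += retained  # True counts as 1, False as 0
    return {pair: wins[pair] / calls[pair] for pair in calls}

scores = affinity_scores(history)
# In this toy data, csr_a retains 2 of 3 high-premium callers,
# while csr_b retains 2 of 3 low-premium callers.
```

A real system would combine many such independent variables into a scoring model rather than a single retention-rate table, but the output is the same shape: a score for each caller-agent pairing.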

So what is cool about this is that it is data-driven but basically agnostic about why it works. It is essentially an engineering approach rather than a psychology-grounded one. They have segmented customers based on their monthly payments and the like, but not explicitly on psychometric measures.

[W]e do know that, for instance, certain CSRs perform well with customers that have $80 premium fees, but they don’t do so well with customers that have $10 premium fees. We don’t necessarily know the reason why. Nor do we need to. And therein lies the difference. In our system there isn’t a lot of science behind why these differences exist, why a rep might be good with $80 versus $10. It’s just evident that that person is good with a certain customer type. So we operate off the fact that it’s true, based on the body of data that we have about the customer base and our past CSRs’ interactions with those customers.

Then there is another twist. You may have a caller pigeonholed and know just who they should talk to. That doesn’t mean that person is available. Hence there is a trade-off between waiting time and caller-agent affinity.

There was a problem we didn’t quite know how to solve right out of the gate, and that was the fact that the best matches are almost always not available. In other words, if we have 50 callers in queue and 1,000 CSRs on the floor, we can create 50,000 different solutions, and we make those calculations 10, 15 times a second. One of the 1,000 CSRs is the best match, so that’s the score to beat — the number that shows how often we make that perfect match.
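The 50,000-solution figure is just the full cross-product: every waiting caller scored against every agent on the floor, re-evaluated many times a second. A minimal sketch of that selection step, with a hypothetical score function and toy data:

```python
def best_matches(callers, csrs, score):
    """For each waiting caller, pick the CSR with the highest affinity score.

    `score(caller, csr)` is a hypothetical interface; with 50 callers and
    1,000 CSRs this loop evaluates 50,000 scores per pass.
    """
    return {c: max(csrs, key=lambda a: score(c, a)) for c in callers}

# Toy affinity table (2 callers x 3 CSRs = 6 scores)
affinity = {
    ("caller_1", "csr_a"): 0.9, ("caller_1", "csr_b"): 0.4, ("caller_1", "csr_c"): 0.2,
    ("caller_2", "csr_a"): 0.3, ("caller_2", "csr_b"): 0.8, ("caller_2", "csr_c"): 0.5,
}
matches = best_matches(["caller_1", "caller_2"], ["csr_a", "csr_b", "csr_c"],
                       lambda c, a: affinity[(c, a)])
# caller_1 -> csr_a, caller_2 -> csr_b
```

The "score to beat" the interview mentions is exactly this per-caller maximum; the question the next passage takes up is what to do when that best agent is busy.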

The vast majority of the time, though, those matches weren’t immediately possible because that CSR was on the phone, so we had to factor in another predictive model, and that was “time to available.” That’s not a massively complex model, because the industry has been solving that kind of problem for a long time.

But when you layer “time to available” into the actual scoring engine, you get some interesting results. If an agent’s average handle time is three minutes, 30 seconds, and he or she has been on the phone for three minutes, 15 seconds, then we can predict they’re about 15 seconds from being available. Then we can weigh in our prediction of customer tolerance or customer survivability — how long they’re willing to wait in the queue before just hanging up.

We know how long we keep customers in queue. We know what the outcomes are when they’ve been in queue, and we can find out where the curve starts to steepen in terms of abandon rates or bad outcome rates. We connect that information with our CSR’s predictive availability curve. If the optimal match is too far away, maybe 45 seconds or three minutes away, then the score for that optimal match becomes dampened and someone else might look more attractive to us. Because while they may not have perfect affinity, the fact that they’re going to become available sooner certainly makes them look more attractive to us.
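The interview does not give the actual dampening formula, but the logic described (predict time to available from average handle time, then discount an agent's affinity score by how long the caller would have to wait relative to their tolerance) can be sketched as follows. The linear discount here is an assumption for illustration; Assurant's real curve is presumably fitted to their abandon-rate data:

```python
def time_to_available(avg_handle_s, elapsed_s):
    """Predicted seconds until a busy agent frees up (naive estimate)."""
    return max(0.0, avg_handle_s - elapsed_s)

def effective_score(affinity, wait_s, tolerance_s):
    """Dampen raw affinity by predicted wait relative to caller tolerance.

    Hypothetical linear discount: a caller expected to abandon before the
    agent frees up contributes a score of zero.
    """
    if wait_s >= tolerance_s:
        return 0.0
    return affinity * (1 - wait_s / tolerance_s)

# The article's example: 3:30 average handle time, 3:15 elapsed -> ~15 s away.
wait = time_to_available(210, 195)

# A 0.9-affinity agent 45 s away loses to a 0.7-affinity agent free right now
# when the caller's tolerance is 60 s.
busy_best = effective_score(0.9, 45, 60)   # 0.9 * 0.25 = 0.225
free_ok = effective_score(0.7, 0, 60)      # 0.7
```

This captures the trade-off the interview describes: a slightly worse match who is available now can outrank the perfect match who is still minutes away.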

The overall payoff to this has been pretty impressive. They have boosted their retention rate from 15% or so up to north of 30%. What’s more, they are saving more big accounts. They weight the calls coming in by the premiums they pay and estimate what fraction of “dollars at risk” they have saved. That number had also been in the 15% range and is now over 45%.

This is just a great article. Clearly, many firms are interested in getting more out of their call centers, and that takes some creative thinking. The know-how to efficiently get wait times down to 20 seconds or whatever target one wants is widely dispersed, and any large firm should be able to do that; going beyond these industry norms requires doing something extra. What is nice about this article is, first, that it actually provides some detail without pretending to have a universal theory of why it all works. Second, it explicitly recognizes that segmenting both agents and customers at a fine level has to impact wait time.

One question I have is how this has impacted the agents and the call center’s turnover rate. I as an agent might be aces with callers in the $20 to $30 range but I might not necessarily like to talk to them. Or I might just prefer to talk to a range of customers. Either of those could lower my job satisfaction. Alternatively, with the new system, I would have a higher success rate and that might be enough to improve satisfaction and lower turnover.
