Friday, February 17, 2012

The Measure That Matters

Is there any topic more discussed in ITSM circles than measurement? Between SLAs, response times, MTTR, average hold time, abandon rate, etc., we are absolutely bombarded with measurement "best practices." I have nothing against these measures; it's just that they don't matter.

Maybe that was a little flippant, since many of these measurements can help guide us toward better service delivery and customer service. My concern is that they don't help us actually determine whether we are delivering better service or better customer service. Because performance against an SLA is relatively easy to measure, we often make the mistake of assuming a higher rate of SLA adherence means better customer service. SLA adherence has no direct correlation to customer service outcomes. It might help us achieve better customer service, and it might not. If you disagree, think of the last time you defused a tense customer situation with SLA adherence metrics, or your amazingly low MTTR.

I'll wait. Take your time.

That's what I thought. Never. Na.Da.

The only measure that matters is the one that measures the gap between your customer's expectation and their perception of the service they received. Yes, I'm talking (again) about my favorite model, the modified SERVQUAL model.

You can read my discussion of the model here.

In my version of the model, we're talking about Gap 7 - the gap between the customer's expectation and the actual service delivered. For service providers, it is the only thing that matters. SLAs, service catalogs, continual/continuous service improvement, MTTR, abandon rates, availability, change success rates, et al, mean absolutely nothing if they don't impact the customer's perception of the service they received. This is what has earned IT a bad reputation the past several years. We're still focused on five-nines and "on-time on-budget", when our customers simply want it to do what they expect it to do, when they want it. I cringe whenever I hear someone start talking about project success rates, and then hear them talk about staying on budget. OK, great. But did the project outcomes meet or exceed original expectations?
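To make Gap 7 concrete, here's a minimal sketch of SERVQUAL-style gap scoring: for each service dimension, the gap is the customer's perception rating minus their expectation rating, so a negative number means we fell short. The dimension names and ratings below are purely illustrative, not real survey data.

```python
# Hypothetical SERVQUAL-style gap calculation (illustrative values only).
# Ratings are on a 4-point scale; gap = perception - expectation,
# so negative means service fell short of what the customer expected.
expectations = {"timeliness": 4.0, "quality": 3.8, "communication": 3.5}
perceptions  = {"timeliness": 3.2, "quality": 3.9, "communication": 2.8}

gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}
overall_gap = sum(gaps.values()) / len(gaps)
```

On this made-up data, quality slightly exceeds expectations while timeliness and communication drag the overall gap negative, which is exactly the kind of dimension-level signal the raw inward-facing metrics never surface.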

The old inward-focused measures are still very important. They can help us determine which elements of IT's service delivery help or hinder our ability to meet the customer's expectations. I just want those measures to stay inside the IT department.

Going back to Gap 7, how can we measure that gap? It's not necessarily a nice tidy number we can measure with precision. To help in this endeavor, I suggest Douglas Hubbard's book How To Measure Anything, and the accompanying website. It's a great book for data nerds like me. One enduring concept for me is the idea that measurement is not about absolute certainty. The purpose of measurement is simply to reduce uncertainty. How much certainty do we need in order to make decisions based on the measurement of Gap 7? Do we need more staff? Should we revamp our prioritization standards? Should we re-focus our communications plan? These are not decisions requiring highly precise measures. We won't get much value from distinguishing between a customer satisfaction score of 3.2173 and 3.2201 on a 4-point scale; but that distinction would certainly matter to a machinist when measuring centimeters of variance in a given part.

The point is that it doesn't really matter exactly what you measure, or whether you can measure it with a high degree of certainty. Find something that makes a reasonable stand-in for Gap 7, and start measuring. It's far more important to be consistent in how you measure than it is to be precise in your measurement. For example, I've started measuring Gap 7 in support services by sending a closed-ticket survey, asking the user to rate the level of quality and timeliness they experienced. It's not perfect, but it's something all staff can point to when determining relative variance between customer expectations and service delivery perceptions. A higher number on the 4-point scale means a smaller variance in Gap 7. Now we can start measuring whether an increase in first call resolution causes an increase in survey scores. Measuring first call resolution, with no context as to the impact on Gap 7, gets us nowhere.
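That last step — checking whether first call resolution actually moves survey scores — can start as something as simple as a correlation over monthly figures. A sketch, using entirely made-up numbers (the FCR rates and scores below are illustrative, not from any real service desk):

```python
# Hypothetical monthly data: first-call-resolution rate vs. average
# closed-ticket survey score (4-point scale). Values are illustrative.
fcr_rate     = [0.62, 0.65, 0.71, 0.74, 0.78, 0.81]
survey_score = [2.9,  3.0,  3.2,  3.1,  3.4,  3.5]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(fcr_rate, survey_score)
```

Correlation isn't causation, of course, but a consistently high `r` over time is at least evidence that pushing FCR is narrowing Gap 7 rather than just polishing an internal number.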

What do you use to measure the gap between customer expectations and their perception of the service received? I'd love to hear your ideas in the comments. If you're not currently measuring that gap, what could you use to start measuring? What works (and doesn't work) in your environment?
