Tuesday, February 8, 2011

Making Service Quality Pragmatic

Do you work for an organization that clearly sees the link between IT services and corporate revenue, where IT is recognized and rewarded for those contributions to revenue? If so, you can stop right here. Congratulations. You belong to an elite group of organizations that exist only in the fantasies of the rest of us.

If you don't belong to such an organization, chances are you struggle with how to define IT's value in ways that get the attention of your partners in the rest of the business. I belong to the second group, and it appears most IT service managers do as well. It's a concept I will address frequently, and from different contexts. Lately I've been looking at it in the context of service quality. We all know that our shop delivers great service quality (right?), but how do we show everyone else? My assertion is that we spend far too much time assessing technology, or how well our processes fit one of the frameworks/standards, to really see what the business sees.
Of course we provide excellent quality!  We've been verified by <fill in the blank> to prove we have the best <fill in the blank> possible!

This is IT navel-gazing at its worst, and it does almost nothing useful to determine actual service quality. I'm not knocking external certifications or verifications. They can be great tools, but only if the rest of the business already accepts that they measure something of value to the business, not just to IT.

I came across an excellent series of blog posts by John Custy (@ITSMNinja on Twitter) addressing Service Quality Management in IT Service Management (Part 1, Part 2, Part 3). He applies the SERVQUAL model from the world of market research.

One reason I like this is that it clarifies the hard-to-define concept of "quality" and gives some practical ways to measure it. The model looks at "gaps" in the Service Management context:

  • Gap 1: Market information, the gap between customer expectations and what the service provider thinks it is providing

  • Gap 2: Service standards, the gap between what the service provider thinks it is providing and the service provider’s standards

  • Gap 3: Service performance, the gap between the service provider’s standards and actual performance

  • Gap 4: Internal communications, the gap between what the service provider is marketing and what is being delivered

  • Gap 5: Service quality, the gap between the customers’ expectations and their perception of the service delivery

The ultimate measurement of quality is Gap 5: the gap between a customer's service expectations and their perception of the delivery of the service. The model helps us see how things we can objectively determine, like customer expectations vs. service strategy (Gap 1), ultimately affect the quality of the service.
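SERVQUAL typically operationalizes Gap 5 by surveying customers on both what they expected and what they perceived for each service attribute, then taking the difference. As a rough sketch (the attribute names and Likert-scale ratings below are made up for illustration, not from the original instrument):

```python
# Hypothetical 1-7 Likert survey ratings per service attribute.
# Gap 5 is scored as perception minus expectation; a negative
# value means the service fell short of what customers expected.
expectations = {"reliability": 6.5, "responsiveness": 6.0, "assurance": 5.5}
perceptions  = {"reliability": 5.0, "responsiveness": 6.2, "assurance": 5.5}

# Per-attribute gap scores
gap5 = {attr: round(perceptions[attr] - expectations[attr], 2)
        for attr in expectations}

# A simple unweighted average across attributes as an overall quality score
overall = round(sum(gap5.values()) / len(gap5), 2)

print(gap5)     # e.g. reliability shows a shortfall of -1.5
print(overall)
```

The point is simply that "quality" stops being a vague feeling once you have paired expectation/perception numbers to subtract.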

The model does an excellent job of describing the Business-to-Consumer (B2C) customer service context. However, I believe it falls down when applied to internal service management and Business-to-Business (B2B) service management. An important element is missing from the customer side of the model: the requesting business unit or entity, and specifically that entity's goals, strategies, and tactics.

An argument could be made that this is just another set of inputs that impact customer expectations, and that's likely true to an extent. What that argument misses, however, is that we're usually talking about different people setting the business strategy and tactics, vs. the people making the specific service request. That's the reason we have Service Strategy and Service Transition/Design as separate entities. We know that Service Strategy is not just an input into how we design services. It is an entirely different process done at a different time, frequently by different people.

An example might help. We've negotiated an SLA for a service that says processing of a critical task will complete within 30 seconds, 90% of the time. During service design, we looked into beefing up the infrastructure so that processing could complete within 10 seconds 90% of the time, but our partners in business leadership deemed the added cost unnecessary. Now a line supervisor complains that the critical task is taking too long. When we investigate, we find that the task's performance for this department falls within the 30-second standard well over 90% of the time. There is now a large gap between Expected Service and Perceived Service, which is our core determination of quality. The primary cause is the gap between executive leadership's expectations and the expectations of the individual service requester.
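The provider side of that example is entirely measurable. A minimal sketch of checking attainment against the 30-second/90% target (the completion times below are invented for illustration):

```python
# Hypothetical completion times (seconds) for the critical task
# in the complaining supervisor's department.
times = [12, 18, 25, 28, 29, 31, 14, 22, 27, 19]

sla_target_seconds = 30   # from the negotiated SLA
sla_percent = 90          # required share of runs within target

# Share of runs completing within the SLA target
within = sum(1 for t in times if t <= sla_target_seconds)
attainment = 100 * within / len(times)

meets_sla = attainment >= sla_percent
print(f"{attainment:.0f}% within {sla_target_seconds}s -> meets SLA: {meets_sla}")
```

The SLA report says "green," yet the requester is unhappy, which is exactly why the requester's expectations need their own gap in the model.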

I adapted a version of the traditional SERVQUAL model to account for ITSM and the addition of the corporate entity. It creates at least two additional gaps to address:

  • Gap 1: Market information, the gap between customer expectations and what the service provider thinks it is providing

  • Gap 2: Service standards, the gap between what the service provider thinks it is providing and the service provider’s standards

  • (New) Gap 3: The gap between corporate expectations and service requester expectations

  • (New) Gap 4: The gap between the service provider's standards and the requester's expectations

  • Gap 5 (formerly Gap 3): Service performance, the gap between the service provider’s standards and actual performance

  • Gap 6 (formerly Gap 4): Internal communications, the gap between what the service provider is marketing and what is being delivered

  • Gap 7 (formerly Gap 5): Service quality, the gap between the customers’ expectations and their perception of the service delivery

The point is that the new gaps are things we can measure and influence, and they in turn influence perceptions of service quality. Let me know what you think in the comments. I want to continue the discussion about how we use these gaps to create metrics for measuring the quality of our services.