Sunday, December 2, 2012

Service Management Goals and Service Level Agreements


(Much of this article is also posted as part of a co-authored piece on ServiceManagers.org)

What is the relationship between Service Level Agreements (SLAs) and Service Management goals? A common misconception is that your SLAs are your goals, or at least that they are a part of your goals.

I'd take the goals discussion a level higher and focus on missional goals first. Using a soccer analogy (sorry, I'm an American ... "football"), SLAs are not the goal. Scoring more than the opponent is the goal. SLAs are more like defining the positions for the players, and communicating what the purpose is for each position. Then you have tactics, the designed plays and maybe a few pre-planned what-if scenarios. These are your defined processes, which should, AT A MINIMUM, meet the SLAs. As the coach, you have selected who should be on the field at any specific time, usually based on how well they are suited for the current/adjusted objective. It could be a push for scoring when you're behind late in the game, or it could be defending a lead while taking as much time as possible off the clock.

The point is that SLAs are not the primary means to meet the overall goals. They are certainly one of the tools you have for communicating expectations, but we also understand that the real goal is to make money, to delight the customer, to maximize their experience, etc.
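
Since an SLA is, at bottom, a measurable floor on service, a defined process can be checked against it mechanically. Here is a minimal sketch of that kind of check in Python; the ticket fields and priority targets are hypothetical, not taken from any particular tool or agreement.

```python
from datetime import datetime, timedelta

# Hypothetical SLA targets by priority; real targets come out of negotiation.
SLA_TARGETS = {
    "P1": timedelta(hours=4),
    "P2": timedelta(hours=8),
    "P3": timedelta(days=3),
}

def met_sla(ticket):
    """True if the ticket was resolved within its priority's target."""
    target = SLA_TARGETS[ticket["priority"]]
    return (ticket["resolved_at"] - ticket["opened_at"]) <= target

def sla_compliance_rate(tickets):
    """Fraction of tickets resolved within target: a floor, not the goal."""
    if not tickets:
        return 0.0
    return sum(met_sla(t) for t in tickets) / len(tickets)

tickets = [{"priority": "P1",
            "opened_at": datetime(2012, 12, 1, 9, 0),
            "resolved_at": datetime(2012, 12, 1, 12, 30)}]
print(sla_compliance_rate(tickets))  # 1.0 -- the floor was met; the goal lives elsewhere
```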

To summarize this perspective on SLAs:

  1. SLAs are one tool out of many in your SM toolkit.
  2. Poorly done SLAs (badly written, poorly negotiated, poorly communicated, etc.) will often inhibit excellent service management far more than any benefits you get from simply having a defined set of expectations.
  3. Even well written and communicated SLAs often degrade the customer experience. Too much focus on SLAs in order to meet goals can inhibit talented service delivery people who can read a situation and determine a better means to achieve the customer experience goals. I've seen too many scenarios where talented, creative people were ignored or pushed aside because SLA adherence was treated as the exclusive way to meet goals. I know it sounds far-fetched, but it does happen.

Customer experience strategy can be a more effective way to define expected outcomes, while taking individual talents into consideration. I highly recommend the book "Outside In: The Power of Putting Customers at the Center of Your Business" as a great approach to this perspective.

It's not "either-or", but rather "yes-and".

Sunday, November 11, 2012

Is ITIL the Enemy?

Is ITIL the enemy? At the end of ServiceNow's Knowledge12 conference in May, a presenter noted that the term "ITIL" had barely been heard the whole week. A decent amount of cheering followed. My take was that ITIL concepts had become such an ingrained part of IT, that talking about those concepts as "ITIL" was unnecessary. Now I'm not so sure.

I was messaging with a former colleague the other day, when she commented on how poorly ITIL is perceived at her current company. "ITIL ... (was) executed in such a half-witted fashion that the amount of overhead and time wasted significantly increased." Like most ITIL failures, it was done as an "ITIL implementation" project. It got me thinking about my own perspective, and this was my response:
Believe it or not, I take a pretty cynical approach to ITIL; not so much in the framework itself, but in the way most companies try to implement it. First and foremost, any attempt to "implement ITIL" is doomed to failure. Most companies try to implement it like it's an ISO standard, like the closer you match the guidelines, the better. I laugh at CMMI assessments of ITIL maturity. The base assumption is that more adherence to the framework = more "mature". That is not the point of IT service management at all. You could have the most clear processes in the world, and still not be meeting or exceeding corporate expectations. I use ITIL as a means to help bridge gaps between business/customer expectations of IT service, and their perception of the actual service provided. If they expect Apple Store service and get what they see as Sam's Club outcomes, there's a huge gap that ITIL (and CoBIT, ISO 20000, etc.) can assist in bridging.
My experience is that ITIL cannot be a goal, and should not be used as a measure of service management success. However, elements of ITIL, such as Service Strategy, can be incredibly useful tools after you figure out where the customer-provider disconnects exist. What do you think? Is ITIL getting in the way, or does it remain a useful tool for helping address service problems?

Monday, November 5, 2012

Lies, Damned Lies, and Statistics: 7 Ways to Improve Reception of Your Data

* Now with a bonus 8th suggestion! Thanks to Stephen Alexander.

"There are three kinds of lies: lies, damned lies and statistics." I hate this quote almost as much as "first thing, kill all the lawyers", and for essentially the same reason. They are both applied wildly out of context, to the point that the meaning assigned to them are almost the exact opposite as the original quote intended. Shakespeare was not talking about a hatred for lawyers. His character, the comic-villain Dick The Butcher, was talking about how a world without lawyers would be a great way to start the utopia dreamed of by murderers and thieves. Shakespeare was not espousing the virtues of lawyers, as some have attributed; but he certainly wasn't saying that killing all the lawyers would be good for humanity either. There's nuance in the comedic moment.

The same is true for the "lies" quote. It is from a Mark Twain article (later included with a series of articles to form a book), in which he was supposedly quoting former British Prime Minister Benjamin Disraeli. There is much debate over who really originated the saying, but that's not my concern here. What bothers me is the haphazard application of the saying, as if it is sufficient proof to indicate the unworthiness of data-based decision making. Of course, all decisions are based on data. Even a hunch is, essentially, a data point.

Even if it bothers me, we live in a world where statistical data is both revered and disdained. If the data supports your idea, it's great! If it doesn't, it is suspicious. In the data-obsessed world of IT Service Management, of which I am one of the chief obsessors, we need to keep some perspective when it comes to statistical information. This became relevant to me one day as decisions were being made around me that were based not on statistical data, but on a series of anecdotes instead. It was assumed that, because the anecdotes appeared to contain similar themes, we could/should make high-impact decisions based on them. The statistical data was presumed to be irrelevant due to the fact that the anecdotes indicated that the data was missing critical information.

It hit me that I was not entirely right, and the others were not entirely wrong. There is significant nuance involved, where the two types of data (statistical and anecdotal) are both needed. That led me to consider some suggestions regarding how we position "data" in the context of ITSM decision making.
  1. Decisions are based on data. All conclusions are inherently based on data of some sort, some qualitative and some quantitative. In the absence of trusted, useful statistical data, decisions will be based on anecdotes, whether or not they represent truth.
  2. Statistical data must move from an untrustworthy state to a trustworthy state. It cannot and will not be used for decision making until the decision-maker trusts the data. We cannot assume that because we have numbers that the intended audience for the numbers will believe them.
  3. Don't get bent out of shape when your numbers are not immediately received as Truth.
  4. Presenting data consistently is far more important than the precision of the data. Be persistent and consistent in how you present and interpret data. This cannot be stressed enough. There is no such thing as perfect data, so stop looking for it. I used to constantly change the data I presented, hoping that the "next version" would catch on, that everyone else would suddenly get it. The opposite is true. When the data presented is changing all the time, you come across as someone with something to hide. Your credibility is shot.
  5. Find out how your data is being received. "Build it and they will come" does not apply to metrics. Ask intended recipients what they think of the data as presented. Is there a way to make it more clear? Are there concerns about data accuracy?
  6. Your data presentation must be actionable. Take action on your data, and teach others how to take action based on the data. If the information is not actionable, it will be ignored and mistrusted.
  7. Anecdotes provide great information. Complaints are amazing opportunities to focus your data queries (a rough sketch of this kind of query follows the list). For example, I found that not all requests were coming through the Help Desk, so the data regarding quality of Help Desk service was not complete. Before I could make decisions based on the data as captured, I had to understand why requests were not coming through the Help Desk. Of course, you also need to find out whether that is a good or bad thing, but that's another discussion.
  8. * Your data must be relevant to the recipient and the context. Often overlooked, but essential to the credibility of your data. Before publishing or presenting your data, make sure you can answer this important question: Why will my intended audience care or find this relevant? If you don't have a clear and concise answer (no more than one brief sentence), your data is probably not ready for consumption.
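
Picking up item 7, here is a rough sketch of the kind of data query I mean: segmenting tickets by intake channel to see how much work never touches the Help Desk. The field names and channel values are invented for illustration, not any vendor's actual schema.

```python
from collections import Counter

# Hypothetical ticket records; the "channel" field is an assumption about how
# your tool records intake.
tickets = [
    {"id": 1, "channel": "help_desk"},
    {"id": 2, "channel": "walk_up"},       # never logged through the Help Desk
    {"id": 3, "channel": "direct_email"},  # sent straight to a favorite engineer
    {"id": 4, "channel": "help_desk"},
]

breakdown = Counter(t.get("channel", "unknown") for t in tickets)
bypassing = sum(count for channel, count in breakdown.items() if channel != "help_desk")

# Any sizable "bypassing" number means the Help Desk quality data is incomplete.
print(breakdown)
print(f"Requests bypassing the Help Desk: {bypassing} of {len(tickets)}")
```
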
What would you add to the list?

Thursday, June 7, 2012

Social is changing the game, in more ways than you think

 

Allow me to move off ITSM a bit. Trust me, I'll get back to it.

I am not a social media expert, and I'm skeptical of many self-described experts. I did, however, recently hit a Klout score of 51, so I must be doing something right, at least regarding social influence and reputation (see "The Reputation Economy is Coming - Are You Prepared?"), if you believe in Klout's interesting, if flawed, influence algorithm.

What is most interesting about Klout is not the actual score, but the IDEA that the value of your sharing is based on the usefulness of what is shared, and less on speed and frequency. Feel free to disagree with how Klout calculates that value. It's much harder to disagree that the value of social sharing is based far more on the perceived quality of the shared content than on the speed with which you can register a comment, opinion, or decision.

A recent Forbes article on the coming reputation economy makes the following point:
The economy is moving in one direction and one direction only. Take time to invest in your online reputation and you will be more confident, more connected, and more desirable to work with.

But how do you invest in your online reputation? Think about the people and organizations you follow online. I'm not just talking about Twitter follows, but that's part of it. Whose posts do you pay the most attention to on Facebook, Google+, Pinterest, LinkedIn, and yes, Twitter? We follow people who provide the most value. It could be entertainment value, professional value, home improvement value, etc. Compared to in-person interactions, we place far less value on sheer quantity of posts, artfulness of presentation, self-promotion, and quick judgments in the social media context.

Your character/reputation/influence is increasingly strengthened by the value of the content you create and shepherd. The 20th century "conventional wisdom" rewarded extroverts, to the point where introversion had essentially become a handicap to overcome. The Old Boy network/corporate boardroom ideal that grew from the Harvard Business School's over-reliance on extroversion as an essential trait for success is starting to die. Most people just haven't noticed it yet.

Think about it. How many folks reading this post are more introverted than extroverted? If you've got a high Klout score, how much of that is based on dominating the social world with quantity over quality? Unless you are a celebrity, no one cares what you have to say if you don't have useful content. No one caring = lower Klout. Limited, thoughtful, useful sharing = higher Klout.

Sounds like an introvert to me.

Wednesday, May 2, 2012

ITSM Maturity Assessments: A Value-based Approach

I am not a big fan of ITSM maturity assessments.  Don't get me wrong, I think we should DO maturity assessments; it's just that they are so frequently done poorly, with useless, maybe even dangerous, starting assumptions:

  1. Improved process maturity is an end goal in and of itself

  2. Interdependence between individual processes is either negligible or non-existent, in the context of measuring maturity

A typical maturity assessment starts with the assumption, implicit or explicit, that improved process maturity is an end goal in and of itself.  Here is a great example of what I mean, based on the CMMI model of evaluating maturity.  The definition of Level 1 maturity for Incident Management indicates there is no defined owner of the process.  Part of reaching Level 2 maturity is identifying a single process owner.  I agree that having a single owner of the process is a good thing, but what goal does it help achieve?  Moving to Maturity Level 2?  Congratulations.  My concern is that many maturity assessments end there.  You're at Level 2 (or 4, it doesn't matter).  Now what?
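
For illustration, here is a minimal sketch of the kind of checklist scoring I'm describing (and questioning). The criteria are invented stand-ins, not actual CMMI or ITIL content, and the point of the sketch is how little the resulting number tells you.

```python
# Hypothetical criteria per maturity level for one process; invented stand-ins.
INCIDENT_MGMT_CRITERIA = {
    2: {"single process owner identified", "incidents logged in one tool"},
    3: {"process documented and trained", "metrics reviewed monthly"},
}

def assessed_level(evidence, criteria=INCIDENT_MGMT_CRITERIA):
    """Return the highest level whose criteria are all met, with Level 1 as the
    default. Note what this does NOT tell you: whether any of it creates
    business value."""
    level = 1
    for lvl in sorted(criteria):
        if criteria[lvl] <= evidence:   # all criteria for this level are present
            level = lvl
        else:
            break
    return level

evidence = {"single process owner identified", "incidents logged in one tool"}
print(assessed_level(evidence))  # 2 -- congratulations. Now what?
```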

The answer to this question depends entirely on why you were doing an assessment in the first place.  Were you just wanting a benchmark to compare against other companies, or against some later point to measure improvement?  Fine.  Let's say you came out slightly ahead of your peer companies in most process maturity levels.  Does that make you a better service organization?  Not necessarily.  It is certainly one characteristic you could use towards making that determination, but there is no guarantee that you are better just because your maturity levels are higher.

Think of it like a basketball team.  Your team, across all five positions, is slightly taller than players on the other teams.  It very well could help you beat your competitors.  Or maybe your teamwork assessments are higher than your competitors'.  Your team passes very well, shoots accurately, and plays great team defense.  However, another team has two undisciplined players with raw talent so great that any normal attempt to stop them is useless.  They make up for their lack of discipline with pure athletic talent and effort.  On defense, wild leaps at the ball lead to frequent steals.  If your players tried that, they would look like buffoons, and end up in lopsided defeats.  This team, however, wins 2/3 of the time.  They might lose big every once in a while, when one or both of their stars are off their game; but their overall results speak for themselves.

IT departments can function like that.  We have to accept that disciplined process adherence doesn't always work well.  I've seen IT shops where a few insanely talented engineers or developers make amazing things happen, with no concern over following procedure.  It works because it works.  The business outcomes are fast and strong enough that they can withstand the occasional failure.  Besides, the super-techs are so good that they can fix their failures and still look like heroes in the process.

In reality, no one wants to run their shop that way.  We know that well-defined processes that also allow for calculated risk taking ultimately lead to better results.  Ultimately we are talking about Value.  What is the value of knowing your process maturity levels?  Very little, if it doesn't lead to creating business value.  What is the business value of having a single Incident Management process owner?  In and of itself, absolutely nothing.

And that's where most maturity assessments fail.  They tell you how well you follow ITIL, COBIT, ISO/IEC 20000, and similar standards and guidelines, but they tell you nothing about whether that adherence actually creates business value.

As part of my presentation for the upcoming ServiceNow Knowledge12 conference, I will present my company's example of how we looked at value:  The old technology focused way, and a newer business value focused way.

Several years ago, we did a self-assessment for a new CEO, presenting the results as the <Insert booming voice here> STRATEGIC TECHNOLOGY READINESS.




We assigned a color code to each block (Red, Yellow, or Green) indicating how "ready" each component was for the future.  There was a time we could get away with this sort of presentation to the business.  Remember the days when we could tell the CEO that, unless the flux capacitor was upgraded this year, the future of the business was in trouble, and they bought it?  Believe me, that's how they heard the message.  We were probably right, too; but that's beside the point.

We presented the STRATEGIC TECHNOLOGY READINESS to the CEO so that he could better understand our strengths and challenges.  You read that right: It was about IT's strengths and challenges, as we saw them.  Here's what we saw:


Wow, that looks pretty good.  IT has awesome people, and we excel at the things the business used to expect from IT: Reliability, Security, and Architecture.  To be fair, Architecture was probably more for show, but it sounds cool.  The areas of struggle were the things the business was responsible for: Priorities, the Applications, and Business Processes.

Unfortunately for us, the IT landscape in the U.S. changed soon after this time.  We were an awesome IT Department, yet started seeing signs of trouble:

  • An internal survey included IT as an area needing improvement

  • Frustration from senior leadership regarding technology modernization

  • Slow implementation of requested new technology

  • "IT is expensive"

It hit me that a new paradigm was needed.  The way we measured maturity needed to change.  ITSM (instead of ITIL) was becoming a term used by industry visionaries.  Along with that came discussions around the Value of IT.  The writing was on the wall.  Business leaders across the country were starting to question what they were actually getting out of the seeming money pit of IT.  Maturity needed to be measured and demonstrated from the perspective of value.  Immature IT led to negative business outcomes.  We needed to be net outcome neutral or (hopefully) net outcome positive.  The new way to show maturity looked a lot more at business outcomes.

This is something much closer to what the business was expecting from IT.  How much is IT support impacting overall productivity?  To what extent were we providing and/or enabling innovation?  How quickly can we change gears towards new technologies?

The new assessment looked more like this:

That's more realistic.  We were excellent technology implementers, and the systems continued to run very well.  We were OK at balancing security with allowing sufficient accessibility.  No one, including IT, was taking responsibility for how well the technology matched current business processes.  The old perspective was that we just implement; it's up to the business to use it well or not.  You get the idea.

I'm not suggesting this as a perfect model of measuring the value of IT maturity.  It is still very rough, and doesn't do a great job of showing the interdependence between the elements.  You are welcome to use it, however.  I recommend that you first determine whether these elements of value are appropriate for your organization.  This worked well for me; you may find the need to drop some and add others.
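
If it helps, here is a rough sketch of how such value elements might be laid out so you can swap them in and out for your own organization. The element names and ratings are examples only, not a standard.

```python
# Hypothetical value elements rated from the business's point of view.
VALUE_ASSESSMENT = {
    "Support impact on productivity":       "Yellow",
    "Enabling innovation":                  "Red",
    "Speed of adopting new technology":     "Yellow",
    "Technology fit to business processes": "Red",
    "Security balanced with accessibility": "Green",
}

def by_rating(assessment):
    """Group elements by rating so the conversation starts from business value,
    not from framework adherence."""
    grouped = {}
    for element, rating in assessment.items():
        grouped.setdefault(rating, []).append(element)
    return grouped

grouped = by_rating(VALUE_ASSESSMENT)
for rating in ("Red", "Yellow", "Green"):
    print(rating, "->", ", ".join(grouped.get(rating, [])))
```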

The point is that a maturity assessment is only helpful when it helps define the value of the relevant processes to the business.  You could start assigning costs to these value elements, and you'd have a great start towards developing a common understanding of how IT costs lead to business value (or not).  For example, the visual helped me see that we were spending a lot of money and time on security, without really identifying business benefits.  Risk is a much better concept, and is understood much better by business executives.  Maybe our spend on security gives us more risk avoidance than we need.

As I said, this is only a start.  I'd love to hear how others bridge the gap between process maturity and business value.  Please leave your ideas in the comments!

Sunday, March 11, 2012

ITIL Secrets Revealed! Is it an Incident? A Request? A Problem? The definitive answer!

There is no value in arguing the difference between Incidents and Problems. That doesn't mean you shouldn't have a clear distinction, just don't waste time arguing over where it lies. I mean this quite literally: within your organization, spend no more than 10 minutes discussing where you distinguish between Incidents and Problems. The same goes for the difference between Incidents and Requests.

The recent outpouring of protracted discussions, especially on LinkedIn, about what makes for the "right" distinction between these types of tickets has gotten out of hand. These discussions are exactly what gives ITIL such a bad name. They also go a long way toward revealing why everyone hates the IT department. Honestly, who decided that ITIL says how we MUST define these tickets? Yes, it provides guidance; but there is nothing magical in that guidance. A much better debate is whether it is important to distinguish between the two at all. Even then, what value is created by debating that for a long time?

Ultimately it is far more important that you decide what the distinction between Incidents, Problems, and Requests should be, and then communicate that decision consistently and frequently. When defining the difference between Incidents and Requests, for example, I would look at the answers to three questions, in declining order of importance (a rough sketch of how such a decision might be encoded follows the list):

  1. What is best for your customer?

  2. What is best for your business? This is really about the metrics you need in order to know whether you are doing what is best for your customer.

  3. What will create the least amount of confusion for your IT staff?
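
For what it's worth, here is a minimal sketch of encoding whatever distinction you settle on, so the debate happens once and the decision gets applied consistently. The rule itself is a hypothetical example; yours will likely differ.

```python
def classify(ticket):
    """One team's hypothetical rule: it's an Incident when agreed service is
    degraded or broken, otherwise a Request. The rule matters less than
    deciding it once and communicating it consistently."""
    return "Incident" if ticket.get("service_degraded") else "Request"

print(classify({"summary": "Email is down for the sales team", "service_degraded": True}))
print(classify({"summary": "Please provision a laptop for a new hire", "service_degraded": False}))
```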


What are your thoughts and experiences? How do you like to determine where an Incident ends and a Request begins?

Friday, February 17, 2012

The Measure That Matters

Is there any topic more discussed in ITSM circles than measurement? Between SLAs, response times, MTTR, average hold time, abandon rate, etc., we are absolutely bombarded with measurement "best practices." I have nothing against these measures, it's just that they don't matter.

Maybe that was a little flippant, since many of these measurements can help guide us toward better service delivery and customer service. My concern is that they don't help us actually determine whether we are delivering better service or a better customer experience. Because measures like performance against SLA are relatively easy to capture, we often make the mistake of assuming a higher rate of SLA adherence means better customer service. SLA adherence has no direct correlation to customer service outcomes. It might help us achieve better customer service, and it might not. If you disagree, think of the last time you defused a tense customer situation with SLA adherence metrics, or your amazingly low MTTR.

I'll wait. Take your time.

That's what I thought. Never. Na. Da.

The only measure that matters is the one that measures the gap between your customer's expectation and their perception of the service they received. Yes, I'm talking (again) about my favorite model, the modified SERVQUAL model.



You can read my discussion of the model here.

In my version of the model, we're talking about Gap 7 - the gap between the customer's expectation and the actual service delivered. For service providers, it is the only thing that matters. SLAs, service catalogs, continual/continuous service improvement, MTTR, abandon rates, availability, change success rates, et al, mean absolutely nothing if they don't impact the customer's perception of the service they received. This is what has earned IT a bad reputation over the past several years. We're still focused on five-nines and "on-time, on-budget", when our customers simply want it to do what they expect it to do, when they want it. I cringe whenever I hear someone start talking about project success rates, and then hear them talk about staying on budget. OK, great. But did the project outcomes meet or exceed the original expectations?

The old inward-focused measures are still very important. They can help us determine which elements of IT's service delivery help or hinder our ability to meet the customer's expectations. I just want those measures to stay inside the IT department.

Going back to Gap 7, how can we measure that gap? It's not necessarily a nice tidy number we can measure with precision. To help in this endeavor, I suggest Douglas Hubbard's book How To Measure Anything, and the accompanying website. It's a great book for data nerds like me. One enduring concept for me is the idea that measurement is not about absolute certainty. The purpose of measurement is simply to reduce uncertainty. How much certainty do we need in order to make decisions based on the measurement of Gap 7? Do we need more staff? Should we revamp our prioritization standards? Should we re-focus our communications plan? These are not decisions requiring highly precise measures. We won't get much value from distinguishing between a customer satisfaction score of 3.2173 and 3.2201 on a 4-point scale; but that level of distinction would certainly matter to a machinist measuring the variance in a given part.
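
To make "reduce uncertainty" concrete, here is a small sketch that puts a rough interval around a survey score. The numbers are invented, and a normal approximation is plenty for this kind of staffing or prioritization decision.

```python
import statistics

# Hypothetical closed-ticket survey scores on a 4-point scale.
scores = [3, 4, 2, 3, 4, 3, 4, 2, 3, 4]

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / len(scores) ** 0.5   # standard error of the mean
low, high = mean - 1.64 * sem, mean + 1.64 * sem      # rough 90% interval, normal approx.

# If the whole interval sits below whatever you consider acceptable, you already
# have enough certainty to act; chasing more decimal places adds nothing.
print(f"mean {mean:.2f}, roughly 90% interval ({low:.2f}, {high:.2f})")
```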

The point is that it doesn't really matter exactly what you measure, or whether you can measure it with a high degree of certainty. Find something that makes a reasonable stand-in for Gap 7, and start measuring. It's far more important to be consistent in how you measure than it is to be precise in your measurement. For example, I've started measuring Gap 7 in support services by sending a closed-ticket survey, asking the user to rate the level of quality and timeliness they experienced. It's not perfect, but it's something all staff can point to when determining relative variance between customer expectations and service delivery perceptions. A higher number on the 4-point scale means a smaller Gap 7. Now we can start measuring whether an increase in first call resolution causes an increase in survey scores. Measuring first call resolution, with no context as to its impact on Gap 7, gets us nowhere.
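
As a rough illustration of that last step, here is a minimal sketch of checking whether first call resolution moves the survey scores at all. The ticket fields are hypothetical, and the correlation is deliberately crude; consistency matters more than sophistication here.

```python
import statistics

# Hypothetical closed tickets: first call resolution flag plus the customer's
# survey score (1-4). Field names are illustrative only.
tickets = [
    {"first_call_resolution": True,  "survey_score": 4},
    {"first_call_resolution": True,  "survey_score": 3},
    {"first_call_resolution": False, "survey_score": 2},
    {"first_call_resolution": False, "survey_score": 3},
    {"first_call_resolution": True,  "survey_score": 4},
    {"first_call_resolution": False, "survey_score": 1},
]

fcr = [1.0 if t["first_call_resolution"] else 0.0 for t in tickets]
score = [float(t["survey_score"]) for t in tickets]

# Pearson correlation (statistics.correlation, Python 3.10+): crude, but it ties
# an internal measure to the Gap 7 stand-in instead of reporting it in isolation.
print(round(statistics.correlation(fcr, score), 2))
```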

What do you use to measure the gap between customer expectations and their perception of the service received? I'd love to hear your ideas in the comments. If you're not currently measuring that gap, what could you use to start measuring? What works (and doesn't work) in your environment?