Sunday, April 6, 2014

Wherefore art thou, service desk?

This started as a reply to a blog written by ITSM consultant extraordinaire Barclay Rae. Go back and read that if you get the chance. My thoughts are intended to complement what Barclay shared and to expand on an area that's had a lot of my attention lately. Where has the service desk gone, what have we done to it, and how do we give it its proper place in ITSM?

We've worked so hard to define ITSM and ITSM software as "more than (just) the help desk" that we started to believe it ourselves: the service desk doesn't matter. We (the ITSM industry) have such a strong inferiority complex that we needed to show everyone else, the business and the rest of IT, that what we do is more than logging calls and managing trouble tickets. Along the way, we've forgotten that the people we put between the rest of IT and the business actually matter. Instead, we looked for easier and cheaper ways to perform that function. This has given rise to:
  1. Outsourcing the service desk function entirely, and/or
  2. Scaling back on the quantity and quality of people we use to staff internal service desks
Over the past year I've come across more and more organizations downplaying the importance of the service desk. Even when facilitating Incident Management workshops, the service desk is barely mentioned. The service desk manager may even be present, but he or she has come to accept that the people taking the calls are treated as virtually irrelevant.

We've justified the easier, cheaper goal with research that tends to imply customers don't want to talk to us anyway. By that logic, the service desk is an anachronism from days long past, when customers had no other options.

I'd argue that internal technology support is significantly different from most transactional customer support, but I'm willing to set that aside for now. Let's go with the assumption that most customers of internal IT services prefer not to interact with your staff. So customers try to resolve issues on their own. What happens when they can't? They've googled the issue, searched through your knowledge base, asked coworkers for help, and the issue remains. It's clear that by this point their issue is not a simple password reset that self service can easily solve.

Now they need to call in professional help, so who do we have them call? Outsourced staff armed only with scripts for common issues, or internal staff from whom we expect little more than the ability to transcribe the issue into a ticket. These are the people you want representing the face of your services to your business?

Yes, we're probably right that automating simple service transactions is a good idea. But that doesn't mean we can scale back on the people who do engage with our customers. What it means is that the people who do call for help can be divided into two groups:
  • Those with more complex issues that aren't easily resolved by standard repeatable steps, OR
  • Those who prefer not to use self help, and need more hand holding
We need the people taking those calls to have a broad technical base, so they understand how to triage and diagnose issues that may have multiple causes across technology AND business disciplines. They must understand how our specific business works. And we need people with the skills to empathize and listen, who actually enjoy helping less tech-savvy customers achieve results.

Instead, we give our customers the polar opposite of what they need. We give them outsourced staff who know nothing about our business, and staff with minimal technical breadth.

The old model was to staff the service desk with entry-level systems analysts. The current model staffs the service desk with the cheapest resources we can find. I propose that what is needed going forward are service desks staffed with junior business analysts or junior relationship managers. Let's use people who are focused on technology breadth and customer outcomes.

Crazy idea, huh?

Friday, February 14, 2014

ITSM Cannot Live on Process Alone

I've received great feedback on my article, "Process Improvement is not Service Improvement". I was in the process of responding to a comment when I realized the response probably needed its own article.

Process improvement is frequently, maybe even almost always, a component of service improvement. What I'm saying is that it's a bad idea to use process CSFs and KPIs as the desired outcome of an improvement project. I've come into client projects where the goal was something akin to "improve Problem Management to CMMI level 3". That's a noble purpose. The conversation might look like this:
Me:  What is your project goal?
Client:  To improve the services we provide to the business. 
Me:  Why do you need to improve services? Is there a specific business driver? 
Client:  Reach maturity level 3 in Problem Management. 
Me:  OK, how will you know when you've reached it? 
Client:  We have some internal targets set. Once we've reached those, we'll have an outside audit done. 
Me:  Great! What do you hope to achieve by doing this? 
Client:  I told you. CMMI level 3. 
Me:  No, what I mean is, what is the business driver causing you to do this now?
Client:  The CIO talked about SOX compliance. There was a finding in our internal audit that needs to be addressed.  
Me:  OK, so the business driver is the remediation of a SOX audit finding? 
Client:  Yes ... (Sigh) ... but our outcome is to reach CMMI level 3.
Then we discuss how maturity level 3 may have nothing to do with addressing SOX compliance. The client agrees with that, but says executives determined that CMMI level 3 would provide what they needed to remove the audit finding.

OK, now we're getting somewhere. It turns out that HOW achieving level 3 remediates the audit finding was never shared with the project team. All they know is that they need to achieve level 3 to remediate an audit finding.

Does having a process success factor, reaching Problem Management maturity level 3, inherently improve the service provided to the business? No. There's an excellent chance it will even increase the cost of providing services. If we're going to increase costs, there had better be a clear business purpose for doing so, and that purpose should clearly provide more benefit than the added costs.

How many different process changes could you implement in order to achieve CMMI level 3? What does it mean to achieve level 3? How do we know that the process changes put in place in order to reach level 3 will actually resolve the audit finding? It's possible that your process changes help you achieve level 3, but do not address the audit finding. This is a recipe for disaster. The "service improvement" effort is based on a process CSF, which was selected in order to meet a compliance issue, and we know that making the necessary process changes may not even resolve the compliance issue!

This isn't a unique example. Plug in terms like "reduce incidents", "improve SLA achievement %" or just about any other process-based metric you want.

There is no direct correlation between achievement of the process goal and improvement of services provided to the business.

You might end up improving services, but you could just as easily increase the costs of providing services with no business-visible improvement in those services.

Process improvement is almost always part of service improvement. The two complement each other very well, but they are not the same thing. Before embarking on any sort of service improvement program, whether it is continual or one-time, make sure the desired business value is clearly defined before you start defining any process CSFs or KPIs.

Failure to do so dooms your program before it even starts.

Saturday, February 8, 2014

Process Improvement is not Service Improvement

What is the statute of limitations on ITSM transgressions? I hope it has long passed, since I am now confessing some of my past sins.

I used "process improvement" interchangeably with "service improvement".

There. I've said it on the Internet, where everything is indisputable fact. Good to unburden myself like that. It's like a good cleanse.

I'm amazed by how often I come across CSI (Continual Service Improvement) efforts that list things like fewer incidents or process efficiency as the primary goals. But again, I once did the same thing. Much ITIL and process maturity guidance tries to sell the idea that "better" process is all we need to reach better service offerings. Process metrics like first contact resolution, mean time to restore, and self service growth are constantly presented as THE way to measure success in IT service management. Of course there are outliers who offer different measures, but the fact that they are outliers speaks volumes.

Here's the confusing part: we are providing a service that is also a product. Customer interactions with the individuals delivering the service are part of that service. We call those interactions "providing service" or "customer service". The word "service" is used all over ITSM, so it's easy to confuse the service-product with the activity of customer service.

Let's be clear: incident management is not a service. It is a process, a series of activities performed to restore a service to its normal working state. Measures of incident management or customer service have no direct correlation to the willingness of customers to consume your service-product. Those measures may have an indirect connection. Please allow me to paraphrase Deming:

You can have great customer service and zero service customers.

(Or in ITIL speak: You can have great service processes and zero service customers.)

This is also true: You can have lots of service customers and lousy customer service (or lousy service processes).

While service process quality is certainly a variable of overall service quality, they are not synonymous. More or better process definition does not mean better service. More process can even lead to degraded overall service-product. (See "Is your IT Team and Budget a Victim of Process Over-Engineering?")

It's well documented that I am not a fan of First Contact Resolution as a success metric. I'll take it a step further and say that ITIL process metrics should never be used to measure service success. Process improvement does not mean service improvement. Service Request process improvement to CMMI level 4 doesn't mean you are delivering an improved service. It can be a tactic used to reach improved service, but it can just as easily end up being an expensive boondoggle that does nothing to improve your service-product.

Service Improvement is about value. Do my customers feel they receive sufficient benefit from their cost investment? Do they like doing business with me?

I've come across ITSM practitioners whose CIO had set a goal of reducing incidents by 2%. And that's their CSI initiative. What the heck do fewer incidents have to do with service value? Enough!

I've come clean. Are you ready to do the same?

Edit: I've posted a follow-up article to address some questions.

Tuesday, December 31, 2013

The ITSM Value Proposition is Incomplete


Business value of IT services is often defined as a simple formula.
Value = Efficiency + Effectiveness
ITIL puts it this way.
Value = Utility + Warranty

Over the past few years I've come to believe that we are missing a key ingredient: customer experience. Customer service is not the same as customer experience, although they are related. Customer service is interested in making the customer as happy as possible after a service-impacting event has occurred. Customer experience is interested in how a consumer of a service interacts with the service through the life cycle of a service event.
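
Put in the same form as the formulas above (my extension, not ITIL's):
Value = Utility + Warranty + Customer Experience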

If I order a laptop for a new employee, customer service wants me to feel good about the outcome of the request. Customer experience is concerned about how I go about making the request, and any interactions I have during the process. Can I check the status through a portal? How easy or difficult is that to do? Was I able to find what I wanted quickly and easily? Was the actual request easy, and comprehensive enough for me to feel comfortable that I will receive value for what I paid?

A recent experience got me thinking about the relationship between service efficiency, effectiveness, and customer experience.

My family ran into some problems at dinner several weeks ago. Two of our three entrees were wrong to the point of needing to have them sent back and re-made. The third entree was prepared below expectation, but not to the point where my wife was willing to send it back and wait. Considering the moderately upscale restaurant context, I expected better accuracy in the orders, and faster turnaround when the two entrees were re-made. The server did an admirable job in taking care of us after the errors, and she did not throw anyone else under the bus -- a good lesson for all service practitioners.

Then the bill came. It itemized our entire meal, including making the two incorrect entrees complimentary, or so I thought. It turns out that the two incorrect items were on the bill twice each. The comped entrees appeared a little further down. I asked the server about it, since it looked like they had intended to comp the two entrees but accidentally added them back in a second time. The server thought it was weird, too, and went to check with her manager. The manager stopped by (a loooooong time later) and explained that the bill was right. For inventory purposes they added every item that had been prepared, and simply removed the cost of the items we had returned. In other words, there were no comps. We simply didn't have to pay for the incorrect entrees that we returned. To keep the inventory accurate, they needed to add each prepared item to the bill and then subtract the returned items.

To be fair, the manager did end up making our appetizer complimentary; but it made me think about efficiency, effectiveness, and customer experience.

As a customer, I would have been OK if the bill simply included the items we kept. Why did my bill need to reflect the items prepared but sent back and comped? The answer was inventory accuracy, but was that a good answer? Including the returned items was good for service efficiency. Depending on how the restaurant defines service effectiveness, it could be considered either good or bad. Customer experience, however, was negatively impacted by this method of keeping inventory straight. I wouldn't have thought twice about it if the bill simply itemized the food that was accepted. My expectations changed when I saw that some items on my bill were comped. At that point I started to wonder why the entrees weren't comped or discounted.

Did the restaurant do themselves a disservice by making something irrelevant to the customer (accurate inventory levels) so clearly visible to the customer? This was a minor issue to me, but I wonder how often IT service providers do something similar in the name of process efficiency. Do we properly consider customer experience while designing and delivering services? I believe the answer is frequently "No".

This can be as simple as how we respond to a customer status inquiry. Which response provides a better customer experience?

  • Let me look up your tickets. I see that Bob updated the request 2 days ago, but I'm not sure what's happening now. He sometimes forgets to keep tickets updated, so I'll have him send you an update.
  • Let me check into this and get back to you. When is the best time to call you back?

It can also be more ingrained in service design. I come across many clients that ask their customers to fill in numerous, possibly confusing, fields looking for specific details during an initial request. It certainly is more efficient for IT, and you could argue about effectiveness as well. I doubt, however, that it adds to a positive customer experience.

I once overheard an IT staffer comment, "We need to get these people to understand how to make a request we can work with". One goal the team identified was to reduce the number of incoming requests. Seriously. During the discussion it became clear that what they really wanted was to increase the value of each request. It was simply the difference between looking at it from an IT efficiency perspective versus looking at it from a business value perspective.

Business value of IT service is no longer just about utility and warranty. Customer experience is a crucial component of value. Nobody outside of IT is excited by the reduced hardware costs of virtualization, while the overall cost of IT continues to grow. That's no better than subtracting a line item from my bill, and then adding it back in a few lines later. There might be a good explanation, but the business customer doesn't care unless it adds to their experience.

Friday, October 4, 2013

What makes for a compelling metrics story?

This article is cross posted at The ITSM Review.

In my first article “Do your metrics tell a story?” I discussed the “traditional” approach to reporting metrics, and why that approach is ineffective at driving action or decisions.

Personal observations are far more effective at driving action. Personal observations that appear to conflict with the data presented can actually strengthen opposition to whatever decision or action the data suggests. Presenting data as part of a story reboots the way we receive data. Done well, it creates an experience very similar to personal observation.

So how can we do this well? What makes a compelling metrics story?

Every element must lead to a singular goal

This cannot be stressed enough. Any metrics story we tell must have a singular purpose, and every element of the package must exist only to achieve that purpose. Look at any report package you produce or consume. Is there a single purpose for the report? Does every piece of information support that single purpose? Does the audience for the report know the singular purpose? If the answer to any of these questions is no, then there is no good reason to invest time in reading it.

ITSM legend Malcolm Fry provides an excellent example of the singular goal approach with his “Power of Metrics” workshops. If you haven’t been able to attend one of his metrics workshops, you are truly missing out. I had the honor of attending when Fry’s metrics tour came through Minneapolis in August 2012. The most powerful takeaway (of many) was the importance of having a singular focus in metrics reporting.

In the workshop, Fry uses a “Good day / Bad day” determination as the singular focus of metrics reporting. ThoughtRock recorded an interview with him that provides good background on his perspective and the “Good day / Bad day” concept for metrics. The metrics he proposed all roll up into the determination of whether IT had a good day, or a bad day. You can’t get clearer and more singular than that. The theme is understood by everyone: IT staff, business leaders … all the stakeholders.

There are mountains of CSF/KPI information on the Internet, and organizations become easily overwhelmed by all the data when trying to decide which CSFs and KPIs to use. Fry takes the existing CSF and KPI concepts and adds a layer on top of CSFs. He calls the new layer the “Service Focal Point”.

The Service Focal Point (SFP) provides a single measurement, based on data collected through KPIs. Good day / Bad day is just one example of using SFPs. We only need to capture the KPIs relevant to determining the SFP.

(Fry also recently recorded a webinar: Service Desk Metrics — Are We Having a Good Day or a Bad Day? Sign up, or review the recording if you are reading this after the live date.)
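
To make the roll-up concrete, here is a minimal sketch of how an SFP like Good day / Bad day might be computed. The KPIs, thresholds, and the all-or-nothing rule below are placeholders I've invented for illustration; they are not Fry's actual formula.

```python
# A minimal sketch of a Service Focal Point roll-up, assuming
# hypothetical KPIs and thresholds -- NOT Fry's actual formula.
from dataclasses import dataclass

@dataclass
class DailyKpis:
    major_incidents: int   # count of major incidents today
    sla_breaches: int      # incidents resolved outside their SLA target
    failed_changes: int    # changes backed out or causing incidents
    csat_score: float      # average customer satisfaction, 1-5 scale

def had_good_day(kpis: DailyKpis) -> bool:
    """Roll every KPI up into a single Good day / Bad day answer.

    Any single threshold breach makes it a bad day; each organization
    would set (and occasionally tweak) its own thresholds, while the
    storyline -- good day or bad day? -- stays the same.
    """
    return (
        kpis.major_incidents == 0
        and kpis.sla_breaches <= 2
        and kpis.failed_changes == 0
        and kpis.csat_score >= 4.0
    )

today = DailyKpis(major_incidents=0, sla_breaches=1,
                  failed_changes=0, csat_score=4.3)
print("Good day" if had_good_day(today) else "Bad day")  # -> Good day
```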

Create a shared experience

A good metrics story creates a new experience. Earlier I wrote about how personal histories – personal experiences – are stronger than statistics, logic, and objective data in forming opinions and perspectives. Stories act as proxies for personal experiences. Where personal experiences don’t exist, stories can affect opinions and perspectives. Where personal experience does exist, stories can create additional “experiences” to help others see things in a new way.

If the CIO walks by the service desk and sometimes observes them chatting socially, her experience may lead her to conclude that the service desk isn’t working hard enough (overstaffed, poorly engaged, etc.). Giving her data demonstrating high first contact resolution and short caller hold times won’t do much to change the negative perception. Instead, make the metrics a story about reduced costs and improved customer engagement.

A great story creates a shared experience by allowing us to experience similarities between ourselves and others. One of the most powerful ways to create a shared experience is by being consistent in what we report and how we report it. At one point in my practitioner career I changed metrics constantly. My logic was that I just needed to find the right measurement to connect with my stakeholders. It created the exact opposite outcome: My reports became less and less relevant.

The singular goal must remain consistent from reporting period to reporting period. For example, you may tweak the calculations that lead to a Good day / Bad day outcome, but the “storyline” (was it a good day or a bad day?) remains the same. We now have a shared experience and storyline. Everyone knows what to look for each day.

Use whatever storyline works for your organization. Fry’s Good day / Bad day example is just one way to look at it. The point is to tell a consistent story.

Make the stakeholders care

A story contains an implied promise that the story will lead me somewhere worth my time. To put it simply, the punch line – the outcome – must be compelling to the stakeholders. There are few experiences worse than listening to a rambling story that ends up going nowhere. How quickly does the storyteller lose credibility as a storyteller? Immediately! The same thing happens with metrics. If I have to wade through a report only to find that there is ultimately nothing compelling to me, I’ll never pay attention to it again. You’ll need to work pretty hard to get my attention in the future.

This goes back to the dreaded Intro to Public Speaking class most US college students are required to take. When I taught that class, the two things I stressed more than anything were:
  • Know your audience
  • Make your topic relevant to them
If the CIO is your primary audience, she’s not going to care about average call wait times unless someone from the C-suite complained. Chances are good, however, that she will care about how much money is spent per incident, or the savings due to risk mitigation.

Know your ending before figuring out the middle of the story

This doesn’t mean you need to pre-determine your desired outcome and make the metrics fit. It means you need to know what decisions should be made as a result of the metrics presentation before diving into the measurement.

Here are just a few examples of “knowing the ending” in the ITSM context:
  • Do we need more service desk staff?
  • How should we utilize any new headcount?
  • Will the proposed process changes enable greater margins?
  • Are we on track to meet annual goals?
  • Did something happen yesterday that we need to address?
  • How will we know whether initiative XYZ is successful?

A practical example

Where should we focus Continual Service Improvement (CSI) efforts? The problem with many CSI efforts is that they end up being about process improvement, not service improvement. We spend far too much time on siloed process improvement, calling it service improvement.

For example, how often do you see measurement efforts around incident resolution time? How does that, by itself, indicate service improvement? Does the business care about the timeliness of incident resolution? Yes, but only in the context of productivity, and thereby of cost: loss or savings.

A better approach is to look at the kind of incidents that cause the greatest productivity loss. This can tell us where to spend our service improvement time.

The story we want to tell is, “Are we providing business value?”

The metric could be a rating of each service, based on multiple factors, including: productivity lost due to incidents; the cost of incidents escalated to level 2 & 3 support; number of change requests opened for the service; and the overall business value of the service.

Don’t get hung up on the actual formula. The point is how we move the focus of ITSM metrics away from siloed numbers that mean nothing on their own, to information that tells a compelling story.
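
For illustration only, here is a minimal sketch of what such a rating might look like. Every factor name, weight, and scale below is invented; a real version would be calibrated against your own baselines.

```python
# A minimal sketch of a per-service value rating. The factor names,
# weights, and scales are invented for illustration only.
def service_rating(productivity_loss_hours: float,  # business hours lost to incidents
                   escalation_cost: float,          # cost of level 2 & 3 escalations, in dollars
                   change_requests: int,            # change requests opened for the service
                   business_value: float) -> float: # stakeholder-assigned value, 0-100
    """Combine the factors into a single 0-100 score; higher is better."""
    score = business_value
    score -= 0.5 * productivity_loss_hours  # lost productivity is the most visible cost
    score -= escalation_cost / 1000         # each $1,000 escalated costs one point
    score -= 0.2 * change_requests          # heavy churn suggests instability
    return max(0.0, min(100.0, score))      # clamp to the 0-100 range

# A high-value service dragged down by incident-driven losses:
print(service_rating(productivity_loss_hours=40, escalation_cost=5000,
                     change_requests=12, business_value=85))  # -> 57.6
```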

If you would like guidance on coming up with valid calculations for your stories, I highly recommend “How to Measure Anything: Finding the Value of Intangibles in Business” by Douglas Hubbard.

Do your metrics tell a story?

This article was cross posted at The ITSM Review and KPI Library Expert blogs.

Do your service management metrics tell a story? No? No wonder nobody reads them.

That was a tweet I posted a few weeks ago, and it’s had some resonance. I know that during my practitioner days, I missed many opportunities to tell a compelling story. I wanted everyone else to get the message I was trying to communicate, and couldn’t figure out why my metrics weren’t being acted upon. I had a communications background before getting into IT, so I should have known better.

Facts are not the only type of data

I’ve blogged about metrics a few times before. In “Lies, Damned Lies, and Statistics: 7 Ways to Improve Reception of Your Data” I shared a story about how my metrics had gone astray. I was trying to make a point to reinforce my perspective on an important management decision. In what became a fairly heated meeting, I found myself saying at least three different times, “the data shows…” Why wasn’t it resonating? Why was I repeating the same message and expecting a different result?

Go back and read that article to see how it resolved. The short answer: I lost.

I’d love to live in a world where only objective, factual data is considered when making decisions or influencing others; but we have to recognize two important realities:


  1. Other types of data, especially personal historical observations that often create biases, are more powerful than objective data ever could be.
  2. Your “objective” factual data can actually reduce your credibility, if it is inconsistent with the listener’s personal observations. As the information age moves from infancy into adolescence, we are becoming less trusting of numbers, not more.
So, giving reasons to change someone’s mind is not only ineffective; it can actually make things worse. Psychological research indicates that presenting facts to change an opinion can cement the opposing opinion more deeply than before.

Information, whether accurate or not, can be found to back up almost any perspective. Why should I trust your data any more than the data I already have? Read the comments section of almost any news story about a controversial subject. How many minds get changed?


We need a reason to care



Why should I pay attention to, act on, or react to, your metrics if there is no compelling reason for me to do so? We have to give our audience a reason to care. We want the audience of ITSM metrics to do something as a result of the metrics. The metrics should tell a story that is compelling to your intended audience.

Let’s look at a fairly common metric – changes resulting in incidents. Frequently we look at the percentage of changes that generated major incidents (or any incidents at all). Standing alone, what does this metric say? Maybe it shows a trend of the percentage going up or down over time. Even so, what action or decision should be made as a result of that data? Without context, we can expect several responses:


  • Service Desk Manager: “Changes are going in without proper vetting and testing.” 
  • Application Development Manager: “We need to figure out why the service desk is creating so many incidents.” 
  • IT Operations Director: “Who is responsible for this?” 
  • CIO: “zzzzzzzz” 
Who has the appropriate response? The CIO, of course (and not just because she’s the boss)! The reality is that the metric means nothing at all. Which is kind of sad, really, since there may actually be something to address.

Maybe the CIO will initiate some sort of action, but not until she hears a compelling story to accompany the metric. If the metric itself doesn’t tell the story, decisions will be made based on the most compelling anecdote, whether or not it is supported by the metric.
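
To see how little the bare metric offers, here is a minimal sketch of the calculation itself, with an invented record structure standing in for real change and incident data:

```python
# A minimal sketch of the bare "changes resulting in incidents" metric.
# The record structure is invented; real data would come from your ITSM
# tool's change and incident records.
changes = [
    {"id": "CHG-101", "caused_incident": False},
    {"id": "CHG-102", "caused_incident": True},
    {"id": "CHG-103", "caused_incident": False},
    {"id": "CHG-104", "caused_incident": False},
]

incident_causing = sum(1 for change in changes if change["caused_incident"])
pct = 100 * incident_causing / len(changes)

# The calculation is trivial -- and so is the insight. 25.0% of changes
# caused incidents... and then what? Without a narrative attaching the
# number to a decision, no one knows what to do with it.
print(f"{pct:.1f}% of changes caused incidents")
```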

Metrics need to tell a story

At a new job around 15 years ago, I inherited a report that had both weekly (internal IT) and monthly (business leadership) versions. Since the report was already being run, I assumed it must be useful and used. The report consisted of the standard ITSM metrics:
  • number of calls opened last month vs. historical 
  • incident response rate by team and priority 
  • incident resolution rate by team and priority 
  • highest volume of incidents by service 
  • etc. 
However, after a few months I realized that nobody paid attention to these reports, which surprised me. According to ITIL, these are all good metrics to pull. I saw useful things in the data, and even made some adjustments to support operations as a result. But my adjustments were limited in scope, the initial improvements didn’t hold, and everyone simply went back to the “old ways”. The Help Desk team that reported to me did see a sustained, significant improvement in their first contact resolution rate, but every other area of support saw nothing more than modest improvements over time.

The fact is that the reports didn’t tell a compelling story. There were other factors as well, but looking back now I can see that the lack of a consistently compelling metrics story held us back from achieving the transformation we were looking for.

So your metrics need to tell a story, but how?

The traditional ITSM approach to presenting data does a poor job of changing minds or driving action, and it can actually strengthen opposing perspectives. Can you think of an example where presenting numbers drove a significant decision? Most likely, the numbers had a narrative that was compelling to the decision maker. It could be something like, “our licensing spend will decrease by 25% over the next three years, and 10% every year after.” That would be a pretty compelling story for a CFO decision maker.

In my next article, we’ll look at how metrics can tell a compelling story.

Monday, September 2, 2013

First Contact Resolution is the last refuge of a scoundrel

"Patriotism is the last refuge of a scoundrel"
- Samuel Johnson, April 1775
This quote came to mind after reading a comment on another ITSM blog. The comment indicated that the core service desk metrics needed for senior management were first contact resolution (FCR), mean time to repair/resolve (MTTR), SLA compliance, and customer satisfaction. This was just a given. I come across that and similar perspectives frequently through client interactions and other online discussions, so I assume this perspective remains fairly mainstream.

In his famous quote, Samuel Johnson decried the use of insincere patriotism, especially as a means to sway public opinion or change parliamentary votes. The end may occasionally justify the means. (One could argue that the unbridled use of "patriotic" messaging in the USA during World War II to raise money and strengthen support for the war effort was a critical element in defeating Hitler and his allies. I don't bring that up to spark a debate over that justification -- only to note that this is a fairly mainstream opinion here in the States.) On the other hand, everyone reading this article can likely come up with MANY examples of governments using patriotism as a means to justify horrific outcomes. Fill in your own examples.

The point I wish to make is that Johnson wasn't denouncing patriotism. He was denouncing patriotism as the means to an end. This is where we come back to ITSM. Can we really use MTTR or FCR as a means to determine the quality of service, and more importantly, the value to our business? Don't assume a great FCR means the service desk is maximizing its value to the business. What if the resolutions they provide are nothing but band-aids for an open wound? One of my favorite Dilbert cartoons has Dogbert doing tech support. Dogbert interrupts the caller, saying "Shut up and reboot" and then "Shut up and hang up." The outcome is the much-revered improvement in average call handle time. The reason it resonates with me is that it so closely matches reality. How often do we promote metrics that, while well intentioned, actually encourage less-than-optimal behavior?

Of course, relating Johnson's quote to ITSM metrics is an exaggeration. Most ITSM metrics-gathering exercises are well intentioned. Many of the mainstream ITSM metrics became popular as a response to perceptions of lousy service from internal IT. It really mattered that we showed improved responsiveness to incidents, because that's what the rest of the business saw every day. Today, however, our business partners expect more from IT, and they want it to cost less. We have to be very careful how and where we spend our resources. Reliance on traditional operating metrics may temporarily improve morale in the "troops", but it can severely hurt where the business really needs IT. If you need to choose how to use your resources, should you put your efforts into getting SLA compliance over 90%, or into going live with Marketing's new campaign on time with little-to-no errors? Let's put it another way: what is the ROI of each option?

IT people, myself included, can act like attention-starved puppies sometimes. We'll do anything to get immediate positive feedback, regardless of the consequences. It feels so good to be the hero who gets the PC working for a coworker, forgetting that the new customer campaign depends on that kiosk you're developing being ready for UAT by the end of the day.

Using MTTR or FCR to show executives how well IT performs is certainly not the act of a scoundrel, but we can easily be fooled into behaving like one. Look at what you measure and the targets you set. Do you know the ROI of reaching those targets? You may be surprised by the result.