Friday, October 4, 2013

What makes for a compelling metrics story?

This article is cross posted at The ITSM Review.

In my first article “Do your metrics tell a story?” I discussed the “traditional” approach to reporting metrics, and why that approach is ineffective at driving action or decisions.

Personal observations are far more effective; observations that appear to conflict with the data presented can actually strengthen opposition to whatever decision or action the data suggests. Presenting data as part of a story reboots the way we receive it. Done well, it creates an experience very similar to personal observation.

So how can we do this well? What makes a compelling metrics story?

Every element must lead to a singular goal

This cannot be stressed enough. Any metrics story we tell must have a singular purpose, and every element of the package must exist only to achieve that purpose. Look at any report package you produce or consume. Is there a single purpose for the report? Does every piece of information support that single purpose? Does the audience for the report know the singular purpose? If the answer to any of these questions is no, then there is no good reason to invest time in reading it.

ITSM legend Malcolm Fry provides an excellent example of the singular goal approach with his “Power of Metrics” workshops. If you haven’t been able to attend one of his metrics workshops, you are truly missing out. I had the honor of attending when Fry’s metrics tour came through Minneapolis in August 2012. The most powerful takeaway (of many) was the importance of having a singular focus in metrics reporting.

In the workshop, Fry uses a “Good day / Bad day” determination as the singular focus of metrics reporting. ThoughtRock recorded an interview with him that provides a good background on his perspective and the “Good day / Bad day” concept for metrics. The metrics he proposed all roll up into the determination of whether IT had a good day or a bad day. You can’t get clearer and more singular than that. The theme is understood by everyone: IT staff, business leaders … all the stakeholders.

There are mountains of CSF/KPI information on the Internet, and organizations easily become overwhelmed trying to decide which CSFs and KPIs to use. Fry takes the existing CSF and KPI concepts and adds a layer on top of CSFs. He calls the new layer the “Service Focal Point”.
The Service Focal Point (SFP) provides a single measurement, based on data collected through KPIs. Good day / Bad day is just one example of an SFP. We only need to capture the KPIs relevant to determining the SFP.
(Fry also recently recorded a webinar: Service Desk Metrics — Are We Having a Good Day or a Bad Day? Sign up, or review the recording if you are reading this after the live date).
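To make the rollup concrete, here is a minimal sketch of how a few KPIs might feed a single Good day / Bad day determination. The KPI names, targets, and pass/fail logic below are my own invented placeholders, not Fry’s actual model; treat it as one possible shape, not a prescription.

# Hypothetical sketch only: rolling a few KPIs up into a single
# "Good day / Bad day" Service Focal Point. The KPI names, targets,
# and pass/fail logic are invented for illustration.

KPI_TARGETS = {
    "first_contact_resolution_pct": 70.0,  # resolve at least 70% on first contact
    "avg_hold_time_seconds": 120.0,        # keep average hold under two minutes
    "major_incidents": 0,                  # no major incidents today
}

def good_day(kpis: dict) -> bool:
    """Return True only if every KPI met its (hypothetical) target."""
    return (
        kpis["first_contact_resolution_pct"] >= KPI_TARGETS["first_contact_resolution_pct"]
        and kpis["avg_hold_time_seconds"] <= KPI_TARGETS["avg_hold_time_seconds"]
        and kpis["major_incidents"] <= KPI_TARGETS["major_incidents"]
    )

today = {
    "first_contact_resolution_pct": 74.2,
    "avg_hold_time_seconds": 95.0,
    "major_incidents": 0,
}

print("Good day" if good_day(today) else "Bad day")

Whatever the underlying calculation, the output the stakeholders see is a single, unambiguous answer: good day or bad day.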

Create a shared experience

A good metrics story creates a new experience. Earlier I wrote about how personal histories – personal experiences – are stronger than statistics, logic, and objective data in forming opinions and perspectives. Stories act as proxies for personal experiences. Where personal experiences don’t exist, stories can affect opinions and perspectives. Where personal experience does exist, stories can create additional “experiences” to help others see things in a new way.

If the CIO walks by the service desk and sometimes observes them chatting socially, her experience may lead her to conclude that the service desk isn’t working hard enough (overstaffed, poorly engaged, etc.). Giving her data demonstrating high first contact resolution and short caller hold times won’t do much to change the negative perception. Instead, make the metrics a story about reduced costs and improved customer engagement.

A great story creates a shared experience by letting us see similarities between ourselves and others. One of the most powerful ways to create a shared experience is to be consistent in what we report and how we report it. At one point in my practitioner career I changed metrics constantly. My logic was that I just needed to find the right measurement to connect with my stakeholders. It created the exact opposite outcome: my reports became less and less relevant.

The singular goal must remain consistent from reporting period to reporting period. For example, you may tweak the calculations that lead to a Good day / Bad day outcome, but the “storyline” (was it a good day or a bad day?) remains the same. We now have a shared experience and storyline. Everyone knows what to look for each day.

Use whatever storyline(s) work for your organization. Fry’s Good day / Bad day example is just one way to look at it. The point is to tell a consistent story.

Make the stakeholders care

A story contains an implied promise that it will lead me somewhere worth my time. To put it simply, the punch line – the outcome – must be compelling to the stakeholders. There are few experiences worse than listening to a rambling story that ends up going nowhere. How quickly does the storyteller lose credibility? Immediately! The same thing happens with metrics. If I have to wade through a report only to find that there is ultimately nothing compelling to me, I’ll never pay attention to it again. You’ll need to work pretty hard to get my attention in the future.

This goes back to the dreaded Intro to Public Speaking class most US college students are required to take. When I taught that class, the two things I stressed more than anything were:
  • Know your audience
  • Make your topic relevant to them
If the CIO is your primary audience, she’s not going to care about average call wait times unless someone from the C-suite complained. Chances are good, however, that she will care about how much money is spent per incident, or the savings due to risk mitigation.

Know your ending before figuring out the middle of the story

This doesn’t mean you need to pre-determine your desired outcome and make the metrics fit. It means you need to know what decisions should be made as a result of the metrics presentation before diving into the measurement.

Here are just a few examples of “knowing the ending” in the ITSM context:
  • Do we need more service desk staff?
  • How should we utilize any new headcount?
  • Will the proposed process changes enable greater margins?
  • Are we on track to meet annual goals?
  • Did something happen yesterday that we need to address?
  • How will we know whether initiative XYZ is successful?

A practical example

Where should we focus Continual Service Improvement (CSI) efforts? The problem with many CSI efforts is that they end up being about process improvement, not service improvement. We spend far too much time on siloed process improvement, calling it service improvement.

For example, how often do you see measurement efforts around incident resolution time? How does that indicate service improvement by itself? Does the business care about the timeliness of incident resolution? Yes, but only in the context of productivity, and thereby cost, loss or savings.

A better approach is to look at the kind of incidents that cause the greatest productivity loss. This can tell us where to spend our service improvement time.

The story we want to tell is, “Are we providing business value?”

The metric could be a rating of each service, based on multiple factors, including: productivity lost due to incidents; the cost of incidents escalated to level 2 & 3 support; number of change requests opened for the service; and the overall business value of the service.

Don’t get hung up on the actual formula. The point is how we move the focus of ITSM metrics away from siloed numbers that mean nothing on their own, to information that tells a compelling story.
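Purely as an illustration, here is a minimal sketch of how such a rating could be combined into one number per service. The factor names, weights, and scales are hypothetical placeholders; substitute whatever calculation is valid for your organization.

# Hypothetical sketch only: combining several factors into a single
# per-service rating. Weights, scales, and sample values are invented.

FACTOR_WEIGHTS = {
    "productivity_hours_lost": -0.4,   # productivity lost to incidents (hours)
    "escalated_incident_cost": -0.3,   # cost of level 2 & 3 escalations ($ thousands)
    "change_requests_opened": -0.1,    # change requests opened for the service
    "business_value_score": 0.2,       # overall business value (0-100)
}

def service_rating(factors: dict) -> float:
    """Weighted sum of the factor values (normalize real data first)."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

email_service = {
    "productivity_hours_lost": 42.0,
    "escalated_incident_cost": 12.5,
    "change_requests_opened": 8.0,
    "business_value_score": 85.0,
}

print(f"Email service rating: {service_rating(email_service):.1f}")

Ranking services by a score like this points the CSI effort at the services hurting the business most, rather than at whichever process happens to own the loudest metric.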

If you would like guidance on coming up with valid calculations for your stories, I highly recommend “How to Measure Anything: Finding the Value of Intangibles in Business” by Douglas Hubbard.

Do your metrics tell a story?

This article was cross posted at The ITSM Review and KPI Library Expert blogs.

Do your service management metrics tell a story? No? No wonder nobody reads them.

That was a tweet I posted a few weeks ago, and it’s had some resonance. I know that during my practitioner days, I missed many opportunities to tell a compelling story. I wanted everyone else to get the message I was trying to communicate, and couldn’t figure out why my metrics weren’t being acted upon. I had a communications background before getting into IT, so I should have known better.

Facts are not the only type of data

I’ve blogged about metrics a few times before. In “Lies, Damned Lies, and Statistics: 7 Ways to Improve Reception of Your Data” I shared a story about how my metrics had gone astray. I was trying to make a point to reinforce my perspective on an important management decision. In what became a fairly heated meeting, I found myself saying at least three different times, “the data shows…” Why wasn’t it resonating? Why was I repeating the same message and expecting a different result?
Go back and read that article to see how it resolved. The short answer: I lost.

I’d love to live in a world where only objective, factual data is considered when making decisions or influencing others; but we have to recognize two important realities:
  1. Other types of data, especially personal historical observations that often create biases, are more powerful than objective data ever could be.
  2. Your “objective” factual data can actually reduce your credibility if it is inconsistent with the listener’s personal observations. As the information age moves from infancy into adolescence, we are becoming less trusting of numbers, not more.
So, giving reasons to change someone’s mind is not only ineffective, it can also make things worse. Psychological research indicates that providing facts to change opinion can cement opposing opinion more deeply than before.
Information, whether accurate or not, can be found that backs up almost any perspective. Why should I trust your data any more than the data I already have? Read the comments section from almost any news story about a controversial subject. How many minds get changed?

We need a reason to care

Why should I pay attention to, act on, or react to your metrics if there is no compelling reason for me to do so? We have to give our audience a reason to care. We want the audience of ITSM metrics to do something as a result of seeing them. The metrics should tell a story that is compelling to your intended audience.

Let’s look at a fairly common metric – changes resulting in incidents. Frequently we look at the percentage of changes that generated major incidents (or any incidents at all). Standing alone, what does this metric say? Maybe it shows a trend of the percentage going up or down over time. Even so, what action or decision should be made as a result of that data? Without context, we can expect several different responses:
  • Service Desk Manager: “Changes are going in without proper vetting and testing.” 
  • Application Development Manager: “We need to figure out why the service desk is creating so many incidents.” 
  • IT Operations Director: “Who is responsible for this?” 
  • CIO: “zzzzzzzz” 
Who has the appropriate response? The CIO, of course (and not just because she’s the boss)! The reality is that the metric means nothing at all. Which is kind of sad, really, since there may actually be something to address.

Maybe the CIO will initiate some sort of action, but not until she hears a compelling story to accompany the metric. If the metric itself doesn’t tell the story, decisions will be made based on the most compelling anecdote, whether or not it is supported by the metric.
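To see the difference context makes, here is a minimal sketch that computes the bare change-failure percentage and then reframes the same data in business terms. Every figure is invented for the example; the cost assumptions in particular are hypothetical.

# Hypothetical sketch only: the standalone "changes causing incidents"
# metric versus the same data framed with business context.

changes_implemented = 120
changes_causing_major_incidents = 9

# The traditional, standalone metric:
pct_failed = 100.0 * changes_causing_major_incidents / changes_implemented
print(f"{pct_failed:.1f}% of changes caused a major incident")

# The same data with context a decision maker might actually care about:
avg_outage_hours_per_incident = 3.0   # hypothetical
productivity_cost_per_hour = 4_000    # hypothetical blended cost ($)
business_impact = (changes_causing_major_incidents
                   * avg_outage_hours_per_incident
                   * productivity_cost_per_hour)
print(f"Failed changes cost roughly ${business_impact:,.0f} in lost productivity")

The first number invites finger pointing; the second invites a decision about where to invest.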

Metrics need to tell a story

At a new job around 15 years ago, I inherited a report that had both weekly (internal IT) and monthly (business leadership) versions. Since the report was already being run, I assumed it must be useful and used. The report consisted of the standard ITSM metrics:
  • number of calls opened last month vs. historical 
  • incident response rate by team and priority 
  • incident resolution rate by team and priority 
  • highest volume of incidents by service 
  • etc. 
However, after a few months I realized that nobody paid attention to these reports, which surprised me. According to ITIL, these are all good metrics to pull. I saw useful things in the data, and even made some adjustments to support operations as a result. However, my adjustments were limited in scope, the initial improvements didn’t hold, and everyone simply went back to the “old ways”. The Help Desk team that reported to me did see a sustained, significant improvement in their first contact resolution rate, but all other areas of support saw only modest improvements over time.

The fact is that the reports didn’t tell a compelling story. There were other factors as well, but looking back now I can see that the lack of a consistently compelling metrics story held us back from achieving the transformation we were looking for.

So your metrics need to tell a story, but how?

The traditional ITSM approach to presenting data does a poor job at changing minds or driving action, and it can actually strengthen opposing perspectives. Can you think of an example where presenting numbers drove a significant decision? Most likely, the numbers had a narrative that was compelling to the decision maker. It could be something like, “our licensing spend will decrease by 25% over the next three years, and 10% every year after.” That would be a pretty compelling story for a CFO decision maker.

In my next article, we’ll look at how metrics can tell a compelling story.