Wednesday, May 28, 2014

Keeping Employees Engaged With ITSM

Employee engagement has been a popular corporate buzzword for the past few years. I've been a bit leery of how the term is applied, since it seems to mean whatever a given organization wants it to mean. I've seen engagement used to mean productivity (productivity has decreased, so employees must be less engaged). I've also seen employee satisfaction surveys used to measure engagement (employees hate working here, so they must not be engaged). Engagement has frequently come to mean the attitude of the employee and how they feel about their direct supervisor.

I just came across an interesting article from Forbes titled "CEO News Flash: If Your Workers Aren't Engaged, It's Your Own Fault", which gives the most useful context for engagement I've seen. The idea is that humans are intrinsically motivated to be valued participants in the workplace. The message is that corporate culture is what most frequently squelches that intrinsic motivation, and leaders have the responsibility to reestablish it. The author suggests we start by looking at two key aspects of leadership:
  1. Setting high standards
  2. Creating a culture of recognition

IT in general and many ITSM initiatives in particular can work against these tactics. Who is more recognized for achievement in your organization? The diligent engineer who always plays by the Change process rules, or the maverick who puts out the dramatic IT fire, often created by their own sloppiness? I hope it's the former; but in many organizations I've worked for and with, the latter unintentionally receives the accolades. And don't assume your organization doesn't reward the arsonist/firefighter. Rewards can come in many forms. Some obvious, some not.

What about SLAs? Are they used to measure individual performance in addition to organizational adherence to agreements? Too often they are. We must remember that SLAs are minimally acceptable targets when it comes to individual performance. It is the equivalent of earning a C grade. Meets expectations. If all employees strive to merely meet your SLA targets, that leaves no room for the occasional task that fails to meet minimum expectations. In order to meet organizational SLAs, we need the performance on individual tasks to exceed minimal expectations more often than not. Do your expectations around employee performance reflect a culture of high standards? Look carefully at how you set expectations around individual performance. If they are the same as the standards around organizational performance (i.e., SLAs), you may be unintentionally creating a culture of low employee engagement.
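To make the arithmetic concrete, here is a minimal simulation sketch. Every number in it (the 8-hour window, the 95% target, the 1-hour spread) is invented for illustration: if individuals aim squarely at the minimum, roughly half their tasks land past it, and the organizational target only holds when typical work comfortably beats the individual minimum.

```python
import random

# Hypothetical SLA for illustration: 95% of incidents resolved
# within 8 hours.
SLA_HOURS = 8.0

def pct_within_sla(aim_hours, n=100_000):
    """Fraction of tasks finished inside the SLA window, assuming
    resolution times vary normally (sigma = 1 hour) around the
    level an individual aims for."""
    met = sum(random.gauss(aim_hours, 1.0) <= SLA_HOURS
              for _ in range(n))
    return met / n

# Aiming exactly at the minimum means missing about half the time...
print(f"aim at 8.0h: {pct_within_sla(8.0):.1%} within SLA")  # ~50%
# ...only work that routinely beats the target sustains 95% overall.
print(f"aim at 6.4h: {pct_within_sla(6.4):.1%} within SLA")  # ~95%
```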

Think of it this way. Performance against the standards you set on an individual basis is a key leading indicator of overall organizational performance. Make your standards high, clear, and reachable.

Provide reinforcement. Publicly recognize performance excellence, focusing on the "why" and "how" over the "what." The "what" of recognition just says "well done". "How" takes it a step further and indicates how the performance enables a greater goal or outcome. Most important, "why" personalizes the experience to say "I get why you are good".

Engagement doesn't have to be a nebulous concept. Managed purposefully from the top, it can create tremendous value. At a time when IT departments continue to struggle for the favor of business partners, we need all the employee engagement we can muster. And it starts with you.

Monday, May 19, 2014

The Big 3 Questions of Consequence

I had a great conversation with a new client today.  It was a pre-workshop call to solidify the agenda for our upcoming process re-engineering workshop.  The discussion turned to how we transition from the old processes to new ones.  It's one thing to design a new business process.  It's another thing entirely to put that process into practice in a manner that optimizes the likelihood of success.

In addition to some standard organizational change management suggestions, such as communication, training, etc., I mentioned the importance of identifying consequences associated with the old procedures. I said something along the lines of, "It's just as important to understand the positive consequences some staff get for behaving inappropriately." I didn't mean legal or ethical inappropriateness, but behavior that is inconsistent with the desired process. I asked the questions:

  1. Do people on your teams ever get praise for putting out day-to-day fires?
  2. How often do they receive praise for doing the right thing so that the fire-fighting situation never arises?

The answer to question 1 is frequently "all the time", while the answer to question 2 is frequently "never".

A while back I wrote about some practical considerations for process and culture change. One part of that post was about consequences for appropriate and inappropriate behavior. I find that the most overlooked part of organizational change is consequences and rewards. I thought about it some more today, and realized that almost every example I've seen of failed process change addressed the following three questions poorly or not at all. Conversely, in addition to having clear measurable goals, every successful major process/organizational change addressed each of them thoroughly.

Three Questions of Consequence


  1. When developing new processes, are you including positive consequences for behaving appropriately?
  2. Does your current process have positive consequences for behaving inappropriately?
  3. Does your current process have negative consequences for behaving appropriately?

Let's take a brief look at each question.

When developing new processes, are you including positive consequences for behaving appropriately (and negative consequences for behaving inappropriately)?

This can manifest in many ways, but it is critical to include clear expectations of positive and negative behavior. How do you appraise employees? Do you include appraisal criteria for process changes? For example, when re-defining change processes, have the relevant employees' job descriptions and/or performance assessment criteria been updated to reflect desired behaviors under the new process? I am surprised how often this is overlooked, or considered minimally important. Process folks all too often assume that everyone else gets the importance of process change the same way we do. Or, at the very least, we assume that process compliance is out of scope for any process (re-)engineering project. We must work with personnel supervisors to ensure that compliance expectations are documented on a role-by-role basis. An additional benefit is that you can also determine whether the supervisors are on board.

I can't state this strongly enough: Your process re-engineering or improvement project will fail without clear behavior expectations.

Does your current process have positive consequences for behaving inappropriately?

These are the most dangerous consequences, and the most important to uncover. As you address organizational change, look for the hidden positive and negative consequences embedded in the current process. Are there a few "star" performers who are always called out for exceptional fire fighting? If I'm consistently rewarded for my fire-fighting efforts, why on earth would I want to help make a transition to a more consistently applied process? Don't forget that everyone else notices the rewards and accolades given to those who work outside the boundaries of the preferred system. What makes it even harder is that we're also talking about perceived positive consequences. Of course I don't intend to reward the system administrator who routinely takes on requests that should go through the service desk. As a manager, however, I may overlook the occasional process lapses and base my promotion decisions on all the glowing customer feedback. What happens when other staff perceive that the rule-bender gets more positive attention and even promotions?

Equally important in this assessment is a determination of why the rule-bender bends the rules in the first place.  Are there problems with the service desk to the point where customers understandably seek out help elsewhere?  Are there issues where, in the best interests of your business, the service desk should be bypassed?

Does your current process have negative consequences for behaving appropriately?

Just as baffling is the idea that there are actually negative consequences for performing appropriately.  These can be the most difficult to uncover, as they are usually the least intentional.  Again, these can be perceived or purposeful, and are frequently associated with positive consequences for behaving inappropriately.  Is someone following expectations around change request lead time also being punished by their boss for slow throughput?  Maybe a service desk agent is evaluated poorly due to a lower volume of incidents handled, while all along they were following the defined expectation of minimizing ticket re-assignments.  Can you add any of your own examples?

Being purposeful about formal and informal consequences is a critical part of any process re-engineering or improvement program. The most thoughtful and well-designed process changes are doomed to fail without thorough assessment and, if needed, remediation of competing and conflicting consequences.

What are your thoughts?

Sunday, April 6, 2014

Wherefore art thou, service desk?

This started as a reply to a blog written by ITSM consultant extraordinaire Barclay Rae. Go back and read that if you get the chance. My thoughts are intended to complement what Barclay shared and expand in an area that's had a lot of my attention lately. Where has the service desk gone, what have we done to it, and how do we give it its proper place in ITSM?

We've worked so hard to define ITSM and ITSM software as "more than (just) the help desk" that we ourselves started to believe the service desk doesn't matter. We (the ITSM industry) have such a strong inferiority complex that we needed to show everyone else, the business and the rest of IT, that what we do is more than logging calls and managing trouble tickets. Along the way, we've forgotten that the people we put in between the rest of IT and the business actually matter. Instead we looked for easier and cheaper ways to do that function. This has given rise to:
  1. Outsourcing the service desk function entirely, and/or
  2. Scaling back on the quantity and quality of people we use to staff internal service desks
Over the past year I've come across more and more organizations downplaying the importance of the service desk. Even when I facilitate Incident Management workshops, the service desk is barely mentioned. The service desk manager may even be present, but he/she recognizes that the people taking the calls are treated as virtually irrelevant.

We've justified the easier, cheaper goal based on research that tends to imply that customers don't want to talk to us anyway, as if the service desk were an anachronism from days long past, when customers had no other options.

I'd argue that internal technology support is significantly different from most transactional customer support, but I'm willing to set that aside for now. Let's go with the assumption that most customers of internal IT services prefer not to interact with your staff. So customers try to resolve issues on their own. What happens when they can't? They've googled the issue, searched through your knowledge base, asked coworkers for help, and the issue remains. It's clear that by this point their issue is not a simple password reset that self service can easily solve.

Now they need to call in professional help, so who do we have them call? Outsourced staff armed only with scripts for common issues, or internal staff of whom we expect little more than being able to transcribe the issue into a ticket. These are the people you want representing the face of your services to your business?

Yes, we're probably right that automating simple service transactions is a good idea. But that doesn't mean we can scale back on the people who do engage with our customers. What it means is that the people who do call for help can be divided into two groups:
  • Those with more complex issues that aren't easily resolved by standard repeatable steps, OR
  • Those who prefer not to use self help, and need more hand holding
We need people taking those calls to have a broad technical base, so they understand how to triage and diagnose issues that may have multiple causes across technology AND business disciplines. They must understand how our specific business works. And we need people with the skills to empathize and listen, who actually enjoy helping less tech savvy customers achieve results.

Instead, we give our customers the polar opposite of what they need. We give them outsourced staff who know nothing about our business, and staff with minimal technical breadth.

The old model was to staff the service desk with entry-level systems analysts. The current model staffs the service desk with the cheapest resources we can find. I propose that what is needed going forward are service desks staffed with junior business analysts or junior relationship managers. Let's use people who are focused on technology breadth and customer outcomes.

Crazy idea, huh?

Friday, February 14, 2014

ITSM Cannot Live on Process Alone

I've received great feedback on my article, "Process Improvement is not Service Improvement". I was in the process of responding to a comment, when I realized the response probably needed its own article.

Process improvement is frequently, maybe even almost always, a component of service improvement. What I'm saying is that it's a bad idea to use process CSFs and KPIs as the desired outcome of an improvement project. I've come into client projects where the goal was something akin to "improve Problem Management to CMMI level 3". That sounds like a noble purpose. The conversation might look like this.
Me:  What is your project goal?
Client:  To improve the services we provide to the business. 
Me:  Why do you need to improve services? Is there a specific business driver? 
Client:  Reach maturity level 3 in Problem Management. 
Me:  OK, how will you know when you've reached it? 
Client:  We have some internal targets set. Once we've reached those, we'll have an outside audit done. 
Me:  Great! What do you hope to achieve by doing this? 
Client:  I told you. CMMI level 3. 
Me:  No, what I mean is, what is the business driver causing you to do this now?
Client:  The CIO talked about SOX compliance. There was a finding in our internal audit that needs to be addressed.  
Me:  OK, so the business driver is the remediation of a SOX audit finding? 
Client:  Yes ... (Sigh) ... but our outcome is to reach CMMI level 3.
Then we discuss how maturity level 3 may have nothing to do with addressing SOX compliance. The client agrees with that, but says executives determined that CMMI level 3 would provide what they needed to remove the audit finding.

OK, now we're getting somewhere. It turns out that HOW achieving level 3 remediates the audit finding was never shared with the project team. All they know is that they need to achieve level 3 to remediate an audit finding.

Does having a process success factor, reaching Problem Management level 3 maturity, inherently improve the service provided to the business? No. There's an excellent chance it will even increase the cost of providing services. If we're going to increase costs, there had better be a clear business purpose for doing so, and that purpose should provide more benefit than the added costs.

How many different process changes could you implement in order to achieve CMMI level 3? What does it mean to achieve level 3? How do we know that the process changes put in place in order to reach level 3 will actually resolve the audit finding? It's possible that your process changes help you achieve level 3, but do not address the audit finding. This is a recipe for disaster. The "service improvement" effort is based on a process CSF, which was selected in order to meet a compliance issue, and we know that making the necessary process changes may not even resolve the compliance issue!

This isn't a unique example. Plug in terms like "reduce incidents", "improve SLA achievement %" or just about any other process based metric you want.

There is no direct correlation between achievement of the process goal and improvement of services provided to the business.


You might end up improving services, but you could just as easily increase costs of providing services with no business-visible improvement in those services.

Process improvement is almost always part of service improvement. They complement each other very well, but they are not the same thing. Before embarking on any sort of service improvement program, whether it is continual or one-time, make sure the desired business value is clearly defined before you start defining any process CSFs or KPIs.

Failure to do so dooms your program before it even starts.

Saturday, February 8, 2014

Process Improvement is not Service Improvement

What is the statute of limitations on ITSM transgressions? I hope it has long since passed, because I am now confessing some of my past sins.

I used "process improvement" interchangeably with "service improvement".

There. I've said it on the Internet, where everything is indisputable fact. Good to unburden myself like that. It's like a good cleanse.

I'm amazed by how often I come across CSI (Continual Service Improvement) efforts that list things like fewer incidents or process efficiency as the primary goals. But again, I once did the same thing. Much ITIL and process maturity guidance tries to sell the idea that "better" process is all we need to reach better service offerings. Process metrics like first contact resolution, mean time to restore, and self service growth are constantly presented as THE way to measure success in IT service management. Of course there are outliers who offer different measures, but the fact that they are outliers speaks volumes.

Here's the confusing part: We are providing a service that is also a product. Customer interactions with individuals delivering the service are part of that service. We call those interactions "providing service" or "customer service". This word service is used all over ITSM, so it's easy to confuse the service product with the activity of customer service.

Let's be clear. Incident management is not a service. It controls a process, or series of activities, performed to restore a service to its normal working state. Measures of incident management or customer service have no direct correlation to the willingness of customers to consume your service-product. Those measures may have an indirect connection. Please allow me to paraphrase Deming:

You can have great customer service and zero service customers.

(Or in ITIL speak: You can have great service processes and zero service customers.)

This is also true: You can have lots of service customers and lousy customer service (or lousy service processes).

While service process quality is certainly a variable of overall service quality, they are not synonymous. More or better process definition does not mean better service. More process can even lead to degraded overall service-product. (See "Is your IT Team and Budget a Victim of Process Over-Engineering?")

It's well documented that I am not a fan of First Contact Resolution as a success metric. I'll take it a step further and say that ITIL process metrics should never be used to measure service success. Process improvement does not mean service improvement. Service Request process improvement to CMMI level 4 doesn't mean you are delivering an improved service. It can be a tactic used to reach improved service, but it can just as easily end up being an expensive boondoggle that does nothing to improve your service-product.

Service Improvement is about value. Do my customers feel they receive sufficient benefit from their cost investment? Do they like doing business with me?

I've come across ITSM practitioners whose CIO had set a goal of reducing incidents by 2%. And that's their CSI initiative. What the heck do fewer incidents have to do with service value? Enough!

I've come clean. Are you ready to as well?

Edit: I've posted a follow-up article to address some questions.

Tuesday, December 31, 2013

The ITSM Value Proposition is Incomplete


Business value of IT services is often defined as a simple formula.
Value = Efficiency + Effectiveness
ITIL puts it this way.
Value = Utility + Warranty

Over the past few years I've come to believe that we are missing a key ingredient: customer experience. Customer service is not the same as customer experience, although they are related. Customer service is interested in making the customer as happy as possible after a service-impacting event has occurred. Customer experience is interested in how a consumer of a service interacts with the service throughout the life cycle of a service event.
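Restated in the same form as the formulas above (my own amendment, not ITIL's):

Value = Utility + Warranty + Customer Experience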

If I order a laptop for a new employee, customer service wants me to feel good about the outcome of the request. Customer experience is concerned about how I go about making the request, and any interactions I have during the process. Can I check the status through a portal? How easy or difficult is that to do? Was I able to find what I wanted quickly and easily? Was the actual request easy, and comprehensive enough for me to feel comfortable that I will receive value for what I paid?

A recent experience got me thinking about the relationship between service efficiency, effectiveness, and customer experience.

My family ran into some problems at dinner several weeks ago. Two of our three entrees were wrong to the point of needing to have them sent back and re-made. The third entree was prepared below expectation, but not to the point where my wife was willing to send it back and wait. Considering the moderately upscale restaurant context, I expected better accuracy in the orders, and faster turnaround when the two entrees were re-made. The server did an admirable job in taking care of us after the errors, and she did not throw anyone else under the bus -- a good lesson for all service practitioners.

Then the bill came. It itemized our entire meal, including making the two incorrect entrees complimentary, or so I thought. It turns out that the two incorrect items were on the bill twice each. The comped entrees appeared a little further down. I asked the server about it, since it looked like they had intended to comp the two entrees but accidentally added them back in a second time. The server thought it was weird, too, and went to check with her manager. The manager stopped by (a loooooong time later) and explained that the bill was right. For inventory purposes they added each item that had been prepared, and simply removed the cost of the items we had returned. In other words, no comps. We simply didn't have to pay for the incorrect entrees that we returned. To keep the inventory accurate, they needed to add each item to the bill, subtract the returned items, and then add back in the items we accepted.

To be fair, the manager did end up making our appetizer complimentary; but it made me think about efficiency, effectiveness, and customer experience.

As a customer, I would have been OK if the bill simply included the items we kept. Why did my bill need to reflect the items prepared but sent back and comped? The answer was inventory accuracy, but was that a good answer? Including the returned items was good for service efficiency. Depending on how the restaurant sees service effectiveness, it could be considered good or bad effectiveness. Customer experience, however, was negatively impacted by using this method of keeping inventory straight. I wouldn't have thought twice about it if the bill simply itemized the food that was accepted. My expectations changed when I saw that some items on my bill were comped. At that point I started to wonder why the entrees weren't comped or discounted.

Did the restaurant do themselves a disservice by making something irrelevant to the customer (accurate inventory levels) so clearly visible to the customer? This was a minor issue to me, but I wonder how often IT service providers do something similar in the name of process efficiency. Do we properly consider customer experience while designing and delivering services? I believe the answer is frequently "No".

This can be as simple as how we respond to a customer status inquiry. Which response provides a better customer experience?

  • Let me look up your tickets. I see that Bob updated the request 2 days ago, but I'm not sure what's happening now. He sometimes forgets to keep tickets updated, so I'll have him send you an update.
  • Let me check into this and get back to you. When is the best time to call you back?

It can also be more ingrained in service design. I come across many clients that ask their customers to fill in numerous, possibly confusing, fields looking for specific details during an initial request. It certainly is more efficient, and you could argue about effectiveness as well. I doubt, however, that it adds to a positive customer experience.

I once overheard an IT staffer comment, "We need to get these people to understand how to make a request we can work with". One goal the team identified was to reduce the number of incoming requests. Seriously. During the discussion it became clear that what they really wanted was to increase the value of each request. It was simply the difference between looking at it from an IT efficiency perspective versus looking at it from a business value perspective.

Business value of IT service is no longer just about utility and warranty. Customer experience is a crucial component of value. Nobody outside of IT is excited by the reduced hardware costs of virtualization, while the overall cost of IT continues to grow. That's no better than subtracting a line item from my bill, and then adding it back in a few lines later. There might be a good explanation, but the business customer doesn't care unless it adds to their experience.

Friday, October 4, 2013

What makes for a compelling metrics story?

This article is cross posted at The ITSM Review.

In my first article “Do your metrics tell a story?” I discussed the “traditional” approach to reporting metrics, and why that approach is ineffective at driving action or decisions.

Personal observations are far more effective at shaping opinion. Personal observations that appear to conflict with the data presented can actually strengthen opposition to whatever decision or action the data suggests. Presenting data as part of a story reboots the way we receive data. Done well, it creates an experience very similar to personal observation.

So how can we do this well? What makes a compelling metrics story?

Every element must lead to a singular goal

This cannot be stressed enough. Any metrics story we tell must have a singular purpose, and every element of the package must exist only to achieve that purpose. Look at any report package you produce or consume. Is there a single purpose for the report? Does every piece of information support that single purpose? Does the audience for the report know the singular purpose? If the answer to any of these questions is no, then there is no good reason to invest time in reading it.

ITSM legend Malcolm Fry provides an excellent example of the singular goal approach with his “Power of Metrics” workshops. If you haven’t been able to attend one of his metrics workshops, you are truly missing out. I had the honor of attending when Fry’s metrics tour came through Minneapolis in August 2012. The most powerful takeaway (of many) was the importance of having a singular focus in metrics reporting.

In the workshop, Fry uses a “Good day / Bad day” determination as the singular focus of metrics reporting. ThoughtRock recorded an interview with him that provides a good background of his perspective and the “Good day / Bad day” concept for metrics. The metrics he proposed all roll up into the determination of whether IT had a good day, or a bad day. You can’t get clearer and more singular than that. The theme is understood by everyone: IT staff, business leaders … all the stakeholders.

There are mountains of CSF/KPI information on the Internet, and organizations easily become overwhelmed by all the data when trying to decide which CSFs and KPIs to use. Fry takes the existing CSF and KPI concepts and adds a layer on top of CSFs. He calls the new layer the “Service Focal Point”.

The Service Focal Point (SFP) provides a single measurement, based on data collected through KPIs. Good day / Bad day is just one example of using SFPs. We only need to capture the KPIs relevant to determining the SFP.

(Fry also recently recorded a webinar: Service Desk Metrics — Are We Having a Good Day or a Bad Day? Sign up, or review the recording if you are reading this after the live date.)
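To show how small the mechanics can be, here is a sketch of a Good day / Bad day rollup. The KPIs and thresholds are entirely my own invention for illustration, not Fry’s actual calculation; the point is only that many KPIs can reduce to one Service Focal Point verdict.

```python
from dataclasses import dataclass

@dataclass
class DailyKpis:
    # Illustrative KPIs only; feed your SFP with whichever ones matter.
    incidents_opened: int
    incidents_resolved: int
    sla_breaches: int
    major_incidents: int

def good_day(k: DailyKpis) -> bool:
    """Roll the day's KPIs up into a single Good day / Bad day verdict.
    Thresholds are invented for illustration."""
    if k.major_incidents > 0:   # any major incident spoils the day
        return False
    if k.sla_breaches > 3:      # tolerate a few breaches, no more
        return False
    return k.incidents_resolved >= k.incidents_opened  # keep backlog flat

today = DailyKpis(incidents_opened=42, incidents_resolved=45,
                  sla_breaches=1, major_incidents=0)
print("Good day" if good_day(today) else "Bad day")
```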

Create a shared experience

A good metrics story creates a new experience. Earlier I wrote about how personal histories – personal experiences – are stronger than statistics, logic, and objective data in forming opinions and perspectives. Stories act as proxies for personal experiences. Where personal experiences don’t exist, stories can affect opinions and perspectives. Where personal experience does exist, stories can create additional “experiences” to help others see things in a new way.

If the CIO walks by the service desk, and sometimes observes them chatting socially, her experience may lead to a conclusion that the service desk isn't working hard enough (overstaffed, poorly engaged, etc.). Giving her data demonstrating high first contact resolution and short caller hold times won't do much to change the negative perception. Instead, make the metrics a story about reduced costs and improved customer engagement.

A great story creates a shared experience by allowing us to experience similarities between ourselves and others. One of the most powerful ways to create a shared experience is by being consistent in what we report and how we report it. At one point in my practitioner career I changed metrics constantly. My logic was that I just needed to find the right measurement to connect with my stakeholders. It created the exact opposite outcome: My reports became less and less relevant.

The singular goal must remain consistent from reporting period to reporting period. For example, you may tweak the calculations that lead to a Good day / Bad day outcome, but the “storyline” (was it a good day or a bad day?) remains the same. We now have a shared experience and storyline. Everyone knows what to look for each day.

Use whatever storyline(s) work for your organization. Fry’s Good day / Bad day example is just one way to look at it. The point is to tell a consistent story.

Make the stakeholders care

A story contains an implied promise that the story will lead me somewhere worth my time. To put it simply, the punch line – the outcome – must be compelling to the stakeholders. There are few experiences worse than listening to a rambling story that ends up going nowhere. How quickly does the storyteller lose credibility as a storyteller? Immediately! The same thing happens with metrics. If I have to wade through a report only to find that there is ultimately nothing compelling to me, I’ll never pay attention to it again. You’ll need to work pretty hard to get my attention in the future.

This goes back to the dreaded Intro to Public Speaking class most US college students are required to take. When I taught that class, the two things I stressed more than anything were:
  • Know your audience
  • Make your topic relevant to them
If the CIO is your primary audience, she’s not going to care about average call wait times unless someone from the C-suite complained. Chances are good, however, that she will care about how much money is spent per incident, or the savings due to risk mitigation.

Know your ending before figuring out the middle of the story

This doesn’t mean you need to pre-determine your desired outcome and make the metrics fit. It means you need to know what decisions should be made as a result of the metrics presentation before diving into the measurement.

Here are just a few examples of “knowing the ending” in the ITSM context:
  • Do we need more service desk staff?
  • How should we utilize any new headcount?
  • Will the proposed process changes enable greater margins?
  • Are we on track to meet annual goals?
  • Did something happen yesterday that we need to address?
  • How will we know whether initiative XYZ is successful?

A practical example

Where should we focus Continual Service Improvement (CSI) efforts? The problem with many CSI efforts is that they end up being about process improvement, not service improvement. We spend far too much time on siloed process improvement, calling it service improvement.

For example, how often do you see measurement efforts around incident resolution time? How does that, by itself, indicate service improvement? Does the business care about the timeliness of incident resolution? Yes, but only in the context of productivity, and thereby cost, whether loss or savings.

A better approach is to look at the kind of incidents that cause the greatest productivity loss. This can tell us where to spend our service improvement time.

The story we want to tell is, “Are we providing business value?”

The metric could be a rating of each service, based on multiple factors, including: productivity lost due to incidents; the cost of incidents escalated to level 2 & 3 support; number of change requests opened for the service; and the overall business value of the service.

Don’t get hung up on the actual formula. The point is how we move the focus of ITSM metrics away from siloed numbers that mean nothing on their own, to information that tells a compelling story.
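As one possible illustration of such a rating, and nothing more, here is a deliberately naive sketch. Every field name and weight in it is hypothetical; as the paragraph above says, the formula itself is not the point.

```python
def service_score(lost_productivity_cost, escalation_cost,
                  change_requests, business_value):
    """Hypothetical rating: business value relative to operational pain.
    The $500-per-change weight is an arbitrary placeholder."""
    pain = lost_productivity_cost + escalation_cost + 500 * change_requests
    return business_value / max(pain, 1)

# Invented figures for two imaginary services.
services = {
    "email":   service_score(12_000, 4_000, 10, 250_000),
    "payroll": service_score(3_000, 9_000, 2, 400_000),
}

# The lowest-scoring services are the first candidates for CSI effort.
for name, score in sorted(services.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.1f}")
```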

If you would like guidance on coming up with valid calculations for your stories, I highly recommend “How to Measure Anything: Finding the Value of Intangibles in Business” by Douglas Hubbard.
… and a few more excellent resources: