Playbook

How to Automate Customer Support (Without Losing Your Best People)

A Practical Playbook for Support Leaders

18 min read · For Support Leaders and Operations Managers · Updated January 2026

The Pressure to Automate Customer Support Is Real

If you lead a support team right now, you're fielding pressure from at least four directions at once.

Your CFO wants cost savings. They've seen the vendor claims about 60% cost reduction. Your CEO read about Klarna and wants to know why you're not doing that. Your best agents are quietly updating their CVs because they've read about Klarna too. And the vendors are circling, promising everything will be fine if you just sign the contract.

Here's what makes it harder: the people with the strongest opinions usually have something to sell. The optimists want your budget. The pessimists want your clicks. Neither has to clean up the mess if they're wrong.

One thing nobody tells you: every vendor demo you've ever seen used cherry-picked tickets. The "where is my order" query with a single tracking number. The password reset with no complications. The FAQ question that matches their training data perfectly.

When you're evaluating vendors, ask them this: "Show me the 10% of tickets your AI handles worst. Show me escalation rates from your live deployments, not pilots. Show me what happens when the customer is already angry from a previous failed attempt."

Watch how they respond. The honest ones will have answers. The others will pivot back to the demo.

AI can genuinely transform support operations. It can also fail publicly and expensively. The difference isn't the technology itself. The difference is implementation speed. The companies that failed went too fast. The companies that succeeded treated automation as a gradual process, not a switch to flip.

This playbook is about the slower, boring path that actually works.

Let's look at why healthy skepticism makes sense here.

The fastest way to sabotage an AI implementation is to lead with "this will replace agents." It sounds obvious. But that's exactly how most AI vendors pitch their products. Headcount reduction. Cost savings. Efficiency. Fewer humans, more machines.

That pitch fails. Here's how (and notice that these are three different failure modes, not one):

Klarna went too fast

You've probably heard about Klarna. In early 2024 their CEO was everywhere, talking about how their AI was doing the work of 700 agents. They froze hiring and let headcount fall to around 3,000. Wall Street loved it.

Then the cracks showed. By 2025, Klarna was recruiting human agents again.

Here's the CEO explaining what went wrong to Bloomberg: "Cost unfortunately seems to have been a too predominant evaluation factor... what you end up having is lower quality."

They're still using AI. It's not that the technology failed. They just optimized for the wrong thing. In my experience, this is the most common failure pattern: treating cost savings as the goal rather than a byproduct of good automation.

McDonald's picked the wrong channel

Some channels are harder than others. Text-based chat is easier than voice. Structured requests are easier than open-ended ones. McDonald's learned this the hard way.

Their AI drive-thru pilot went viral for all the wrong reasons: bacon on ice cream, $222 worth of nuggets nobody ordered, customers fighting with robots that couldn't understand "just a water, please." They shut it down at over 100 locations in mid-2024.

Drive-thru ordering has background noise, accents, interruptions, and ambiguous requests ("make it a meal," but which meal?). The technology wasn't ready for that environment. It might work fine for text-based support, where the input is clean and structured.

Air Canada got sued for what their bot said

Their chatbot told a customer he could claim a bereavement discount up to 90 days after flying. That policy didn't exist. When the customer asked for his money back, Air Canada said no. A tribunal said yes, and ordered the airline to pay.

Here's the part that should worry you: Air Canada tried to argue the chatbot was "a separate legal entity." The tribunal didn't buy it. If your AI says something, you said it. No audit trail, no human review, no defense.

Each company skipped the training phase. They went from "let's try AI" to "AI is handling tickets" without the middle step where humans review AI outputs, calibrate accuracy, and build confidence.
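
What does that middle step look like in practice? Here's a rough sketch of the routing logic in Python, assuming a helpdesk that can hold AI drafts for agent review. The function names, the confidence field, and the threshold are illustrative, not any particular vendor's API:

# Illustrative routing for the "training phase": the AI drafts a reply, but
# nothing reaches the customer without a human decision until the team has
# calibrated confidence against real outcomes. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class AIDraft:
    ticket_id: str
    reply_text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route_draft(draft: AIDraft, shadow_mode: bool, send_threshold: float = 0.9) -> str:
    """Decide what happens to an AI-drafted reply."""
    if shadow_mode:
        # Training phase: every draft goes to an agent for review.
        return "queue_for_agent_review"
    if draft.confidence >= send_threshold:
        # Only well-calibrated, high-confidence drafts go out automatically.
        return "send_to_customer"
    # Everything else becomes a suggestion the agent can edit or discard.
    return "suggest_to_agent"

# During the review phase, run in shadow mode and compare agent edits against
# drafts to decide whether the threshold, and the AI's scope, are right.
print(route_draft(AIDraft("T-1042", "Here's your tracking link...", 0.72), shadow_mode=True))
# -> queue_for_agent_review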

And here's something worth saying directly: "escalate to human" is not a safety net. If 40% of your AI conversations end up escalating, you haven't automated anything. You've added a step. You've made the customer explain themselves twice. You've handed your agents half-finished tickets with missing context.

What's a healthy escalation rate? It depends on your ticket mix, but in my experience, well-implemented Tier 1 automation should see under 15%. Above 25%, something's wrong with your AI's scope, training, or confidence calibration. Above 40%, you're actually making things worse.
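
If you want to put numbers on that, the check is trivial. Here's a minimal Python sketch; the ticket counts are made up, and the bands simply encode the rough guidance above:

def escalation_health(ai_conversations: int, escalated_to_human: int) -> str:
    """Classify an escalation rate against the rough bands above."""
    if ai_conversations == 0:
        return "no AI conversations yet"
    rate = escalated_to_human / ai_conversations
    if rate < 0.15:
        band = "healthy for Tier 1 automation"
    elif rate <= 0.25:
        band = "borderline: check scope, training, and confidence calibration"
    elif rate <= 0.40:
        band = "unhealthy: the AI is taking tickets it can't finish"
    else:
        band = "harmful: customers are explaining themselves twice"
    return f"{rate:.0%} escalation rate ({band})"

# Example week: 1,000 AI-handled conversations, 180 escalated to a human.
print(escalation_health(1000, 180))  # 18% escalation rate (borderline: ...)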

When your agents see these stories, they don't think "that won't happen here." They think "I'm next." And given what they're reading, their concern is rational.



If you've been hesitant about AI, you're not being a Luddite. You're being prudent. AI works. The real question is whether you can implement it without falling into the cost trap, the technical failure, or the liability gap.

You can. But it takes longer than the vendors suggest.


Ready to start your automation journey?

Hay helps support teams automate up to 80% of tickets while keeping humans in control.

Start your pilot