The trouble with great chatbots
Shep Hyken
6 min read

As organizations experiment with how to get the most out of their AI investments, one use case is clearly ahead of the pack: customer support.

Nearly 9 out of 10 executives polled by IBM believe their companies will be using generative AI to interact with customers in the next two years. Investment in gen AI support solutions is increasing at around 25% per year and is on track to exceed $3 billion by 2033.

Automating different facets of customer support doesn't just reduce costs; it has the potential to deliver a superior customer experience. The most advanced AI chatbots, in fact, can even seem more empathetic (more human, if you will) than flesh-and-blood service agents.

Case in point: a recent study by researchers at the University of Southern California found that AI chatbots were better at detecting customers' emotions than human support staff. By offering more emotional support (and fewer helpful-but-less-empathetic suggestions), the bots allowed customers to feel heard in a way that the service reps didn't. According to the study, customers "reported a higher sense of hope, reduced distress, and decreased discomfort from reading a response actually generated by AI."

But there's an enormous fly in this ointment: when the test subjects were told that the sympathetic support rep they were chatting with was an AI, their ratings flipped. They felt less heard by the bots.
