What CX leaders should know about artificial empathy


It’s complicated

First of all, the health sector is pretty notorious for its lack of empathy. Stephen Trzeciak and Anthony Mazzarelli, two physician scientists, have even been talking about a compassion crisis in healthcare. In fact, 71% of respondents to a 2019 study said they experienced a lack of compassion when speaking with a medical professional, and 73% said they always or often feel rushed by their doctor. And that was even before the pandemic and the ensuing mental health crisis. So if there is one domain in which it is easy for an AI to “win” on empathy and compassion, it’s probably this one.

But that certainly does not mean that we should underestimate this evolution. We’ve been seeing it everywhere, actually, not just in healthcare. I remember a post from an employee of Endurance, who was a little dismayed by some of the responses to a very vulnerable post on LinkedIn about someone tragically losing a loved one, responses which she described as “tone-deaf and lacking in empathy”. As an experiment, she asked ChatGPT to respond, and it did a pretty good job compared to the humans:

“I am deeply sorry for your loss and I know how difficult it can be to cope with the death of a loved one. It’s important to remember that it’s okay to feel whatever emotions come up, even though it may be difficult. Please know that I’m here for you and I’m here to listen if you need someone to talk to.”

Now I know that all of you are probably thinking “yes, but a bot can’t really feel empathy, it only mimics it, so that doesn’t count”. Well, I don’t want to be cynical here, but doctors who do show sympathy are paid to do so, so that’s perhaps also not ‘real’ in the strictest sense. It’s part of an exchange. What does matter, to use Maya Angelou’s words, is “how you make people feel”. And even if it’s a bot making you feel better, we’ll probably get used to that, just like we stopped talking about how online interactions weren’t the same as real ones. There’s a shift happening right now, where bots are becoming better and better at mimicking humans, and that’s an extremely fascinating domain for those of us working in CX.

A flawed system

But perhaps the most interesting thing here is not the fast-evolving technology, but how it exposes gaps in our ‘old’ systems. Crises are not just difficult periods; they tend to uncover what is wrong with the system. The war between Ukraine and Russia triggered an energy crisis, for instance. But that crisis was only able to happen because Europe was overdependent on one nation for its gas supply. It’s always better to diversify, because when you put all your eggs in one basket and that basket is taken away… well, we know what happened there.

In exactly the same way, the doctor-versus-AI challenge is (at the moment) not just about a technology getting so powerful that it is better than humans. Rather, it exposes a fragile and poorly constructed system in which physicians are overworked, burnout is rampant and 56% of physicians said they just “don’t have time for compassion” (in 2012!). Based on a survey conducted by the American Psychological Association, 45 percent of psychologists reported feeling burned out in 2022. Nearly half also reported that they were not able to meet the demand for treatment from their patients.

That is why I believe this is a crucial moment. We have two choices: we keep the flawed systems and mitigate their gaps with technology, or we use the technology to make the system better for employees/doctors as well as customers/patients. We can’t just leave the empathy to the bots of this world while, for instance, doctors stay in the background with very little patient interaction. We must have the bots help the doctors so that they have more time, and more compassion, for the severest cases, which really need human empathy and support.

There is a big difference between the two, and truly understanding that difference is essential.

A human transformation

What worries me, though, is that the study concluded that doctors could use AI assistants like ChatGPT to “unlock untapped productivity” by freeing up time for more “complex tasks”. “Well, Steven, isn’t that exactly the same thing as you state above?”, you might think. I don’t think so. Because this clearly shows that, here too – exactly as is currently the case in the health sector – the focus is on productivity (to the organization’s benefit, not that of the employees or customers) instead of on patient care, compassion and empathy.

To end on a positive note: it’s great that we are realizing this now. Let’s not put an AI icing on an old, mouldy, flawed organizational cake. Let’s make a better, tastier cake and add a superb frosting. It’s the same as always, really: everything inside our company – from how we treat our employees or suppliers, to how we communicate, set KPIs, organize processes etc. – has an impact on our customers. We cannot fix that with technology. We must first fix those flawed processes, KPIs, communications etc. and then make them better with technology.

Every digital transformation calls for a human transformation first.

In fact, the digital part will be the easy one. You can find companies that will get you all the AI you need. You can train your people to work with it. But the human transformation – where we really focus on skills and organizational development – will be the real challenge, in my opinion. So I would advise you to go for a parallel track: when you invest in AI, always invest the same amount of energy and time in fixing your human system, so that you will be able to differentiate yourself as a human company in a world of automation.