Five crucial ethical questions in the future of Customer Experience

POWER – How much power should (tech) companies be allowed to have?

Every ethical question that follows has its roots in this first essential query: how much power should influential tech companies like Google, Facebook, Apple and TikTok be allowed to have over consumers and – in consequence – over society?

Every new technological revolution – from the first industrial revolution and the age of steam and railways to the current information age – produces giant pockets of power. When we talk about the tech titans of today, we seem to forget that. We have always had trouble finding a balance between powerful companies, consumers and the governments that should be protecting the interests of those consumers. Standard Oil – the American oil-producing, transporting, refining and marketing company – was, for instance, ruled to be an illegal monopoly by the U.S. Supreme Court in 1911 and dissolved into 34 (!) smaller companies. The big difference between then and now is that today’s tech companies have so much more information about consumers. But back then, companies were just as big and powerful, and their impact – on elections, on the economy, on the environment – was just as all-encompassing.

Up until now, it was always governments that took care of breaking up monopolies. Only recently, a report from top Democratic congressional lawmakers found that Amazon, Apple, Facebook, and Google are engaging in a range of anti-competitive behavior, and that US antitrust laws need an overhaul to allow for more competition in the US internet economy. “To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons,” the report’s introduction states.

Make no mistake, though: this is not just a legal or economic matter. Allowing these types of monopolies to act as they want, and to control so much data from so many channels, also has a huge impact on the endgame of customer experience. Where do we draw the line on their power? If they can convince us what to buy and how to dress, does that also mean we should allow them to influence how we vote, and then how we feel?

Where do we draw the line? That is a big, hairy, complex question that becomes more difficult to answer with each passing year. Just think of the inability of Twitter and Facebook to control the disinformation around the elections. It shows that even they cannot control their own power. So how are our governments – traditionally slow-moving institutions – going to protect consumer rights and steer the customer experience in ethical directions? Especially now that so many companies are crossing industry lines, gathering data from consumers in so many different environments and enhancing their knowledge and power with every click. And let’s not forget the problem of radicalization and the move to the far right, which results in governments that are not very interested in protecting consumer rights.

Power is no longer just about networks of influence, money or even scale – as I pointed out, we have always had those problems. It’s about data. And today, the result is that even a relatively small digital company can have an incredible amount of knowledge about consumers, and thus power.

So, again, where do we draw the line, on:

  • The right to privacy
  • The right to be treated in a trustworthy manner
  • The right to happiness
  • The right to equality

And more importantly: who will draw that line? Will it be the governments? Will it be self-regulating systems within the tech companies? Will it be full algorithmic transparency? Or will it be the consumer, claiming back power over their own data through technologies like blockchain? Only time will tell.

PRIVACY – Who can use our data?

This is one of the biggest ethical questions of our times: who can use our data? The common adage used to be that ‘there is no such thing as a free lunch’: when we decide to use ‘free’ services and highly convenient functionalities, we pay with our data and forfeit the rights to it. But in recent years, under the influence of people like Tristan Harris (The Social Dilemma), people are no longer satisfied with this trade-off, which seems highly unbalanced in retrospect.

Take facial recognition, which lies at the root of (potentially) so many fantastic customer experiences. Facial recognition can allow us to walk in and out of a store without needing to do anything to pay. It allows us to board and leave a plane quickly and safely. It allows us to pay with a smile. Great, right? But it could also allow a medical insurer to diagnose high blood pressure, certain genetic diseases and a propensity to longevity, and demand steeper fees. Understandable from their side, but definitely less great for us. Facial recognition even has the potential to infer someone’s sexual orientation. Is that something that we want?

And what if non-democratic governments get a hold of our data? China, for instance, has invested a lot in facial recognition, and the customer experience that companies over there are offering is mind-blowing. But what happens when the government decides to use that data to eradicate dissent?

And that’s ‘just’ facial recognition. Just think of what this ethical discussion will mean when we have effective brain-computer interfaces like the one Elon Musk is trying to create with Neuralink. It could help people walk again, or see again. It could also make us smarter, more empathic or less clumsy. But that also means it could make us more aggressive, or increase our craving for sugar or alcohol. And who will own the data that flows between the BCI and our brain? These are really big questions that we will need to answer.

HAPPINESS – Who can control what we feel?

An even bigger question is the one about our personal happiness. Is the experience that companies are offering us making us happy? I am a true optimist, as you know, and I always tend to see the silver lining. I love that technology offers us so much convenience and saves us so much time that we can invest more in our families, our friends and our hobbies. Just think of all the stress and hassle that Amazon’s in-home delivery relieves working parents of. No one likes to go grocery shopping, especially during peak hours. So that is a definite plus for me.

But we are gradually moving into an era in which technology is not just tracking our behavior in order to predict and influence us, but is zooming in on our emotions. Actually, technology has already been affecting our emotions for quite a while now, and often in a bad way. Social media, for instance, is well known to trigger teenage anxiety, depression, self-harm and suicide, especially among girls. This has to do with the pressure of perfection that comes with likes (or the lack of them) from peers, and with the artificial use of filters that send the message that the way we look is just not good enough. It also has everything to do with the addictive design of social media, which has been purposefully created to get us ‘hooked’. Edward Tufte put it elegantly: “There are only two industries that call their customers ‘users’: illegal drugs and software.” Highly popular books have been written about this – Hooked by Nir Eyal, for one, which wants to help companies build ‘habit-forming products’. When it was originally published in 2013, people thought it was fantastic. And it is clever, no doubt. But is it ethical? Should we be allowed to build something that is addictive?

That was ‘just’ step one of tech impacting our emotions, through addictive CX. Now companies are increasingly trying to monitor our emotions directly. Amazon, for instance, recently unveiled Halo, a competitor to the Apple Watch and Fitbit that goes beyond tracking health. The optional Tone feature listens to the user’s voice throughout the day and analyzes that information to present a picture of how they felt: showing the times they were feeling energetic, hopeful, or hesitant. For instance, the device might pick up on an argument or a tense conversation at work, and indicate that the user felt elated at 10 a.m. but hesitant 30 minutes later.

https://youtu.be/t9rZWa1fabc

Microsoft, too, plans to embed a series of “wellness” tools in Teams to address mental health problems. These “Personal wellbeing experiences” will be added early next year: they will include “emotional check-ins” (users can select an emoji expressing how they felt about the work day), a “virtual commute” (to allow time to reflect before and after workdays), and guided meditation sessions through a partnership with Headspace.

Now, in theory, these are both fantastic features. But if brands already had such a far-reaching impact on our emotions before they could measure them, what will happen now? Analyzing and ‘using’ the emotions of consumers is an ethically thin line to walk. We really ought to be managing this before such emotive technologies become the norm. According to Gartner, by 2024, AI identification of emotions will influence more than half of the online advertisements you see. Is this ethical?

When it comes to technology playing on emotions, I also often think about that South Korean TV documentary Meeting You where Jang Ji-sung, who lost her daughter to blood cancer in 2016, ‘met’ her again as a very detailed VR simulacrum. Is this something that we should be doing? Do we want this? Is that even useful?

TRUST – Do we trust the algorithms to make decisions that are good for us in the long term?

The next big ethical question is: if brands can monitor our behavior and our emotions, do we trust them to make the right decisions? The laws of trust are fairly simple: most of us are quick to offer it, but we will take it away just as quickly. If your housekeeper never steals from you, you will trust him or her. If (s)he does, you never will again. That works the same way for algorithms: if booking.com keeps suggesting hotels that turn out to be exactly what I like, I will trust booking.com, quite blindly. If it doesn’t, I won’t. A recent study even showed that people tend to rely more on advice when they think it comes from an algorithm than from a person. Trust is no longer an issue of digital versus human, but of delivering what you promise. It’s about continuity.

Now, when we arrive at the age of automated buying – where algorithms will decide what we should buy and when – this trust will become increasingly important. Our fridge will decide if we need more milk. And it could decide to buy tomatoes, because everything needed to make spaghetti bolognaise is present in our fridge except the tomatoes. But it could also ignore that information and buy everything we need for an intricate recipe, so that we spend more money. And what about white wine? It might notice that the wine bottles in our fridge keep getting emptied faster and faster. Should it anticipate that and buy more of them? Is that ethical? Is that trustworthy behavior? Probably not, right?

That’s in large part because there is a very big difference between what is ‘good’ for us in the short term – like buying chocolate because we feel sad – and what is good for us in the long term: ignoring the chocolate craving, maybe going for a long run and then drinking an apple smoothie. Our smart fridges could nudge us in either direction. But we will only keep trusting them if they take the ‘Day After Tomorrow’ approach: the one that is good for us in the long term.
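To make that fork concrete, here is a deliberately simplified sketch of an automated replenishment agent. Everything in it – the fridge contents, the “long_term_goals” structure and the two scoring strategies – is hypothetical, not any retailer’s actual logic; it only illustrates how the same data can be optimized for short-term basket value or for the user’s long-term interest.

```python
# Toy sketch of an automated-buying agent. Everything here is illustrative:
# the fridge contents, the "long_term_goals" structure and both scoring
# strategies are hypothetical, not any real retailer's logic.

FRIDGE = {"milk": 0, "tomatoes": 0, "white_wine": 1, "chocolate": 2}
WEEKLY_WINE_USE = [2, 3, 4, 5]  # bottles emptied per week: a rising trend

long_term_goals = {"max_wine_per_week": 2, "limit_sugar": True}


def restock_for_revenue():
    """Short-term view: replenish everything that ran out and lean into
    rising consumption, because bigger baskets mean more revenue."""
    basket = [item for item, qty in FRIDGE.items() if qty == 0]
    if WEEKLY_WINE_USE[-1] > WEEKLY_WINE_USE[0]:        # consumption is rising
        basket += ["white_wine"] * WEEKLY_WINE_USE[-1]  # so stock up on it
    return basket


def restock_for_user(goals):
    """'Day After Tomorrow' view: replenish essentials, but cap anything
    that conflicts with the user's stated long-term goals."""
    basket = [item for item, qty in FRIDGE.items() if qty == 0]
    if goals["limit_sugar"]:
        basket = [item for item in basket if item != "chocolate"]
    wine_needed = max(0, goals["max_wine_per_week"] - FRIDGE["white_wine"])
    return basket + ["white_wine"] * wine_needed


if __name__ == "__main__":
    print("Revenue-driven basket:", restock_for_revenue())
    print("User-driven basket:   ", restock_for_user(long_term_goals))
```

The first strategy leans into the rising wine consumption because that maximizes the basket; the second caps it against the user’s own stated goal – which is the kind of behavior that keeps our trust.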

Automated buying will free up so much time for us, but it also potentially has a very dark side which we really ought to think about.

EQUALITY – How do we stop algorithms from increasing inequality?

It’s still the sad truth that there is a lot of inequality in the world, on many different levels. Algorithms can only judge the data that they are fed, and since the majority of people working in tech are WEIRD – Western, Educated, Industrialized, Rich, and Democratic – a lot of the data they use is WEIRD too, and thus exceedingly biased. That obviously has a lot of impact.

Facial recognition, for instance, works best on white male faces, because that is what the systems are mostly fed. About two years ago, Amazon’s AI recruiting system turned out to have a serious problem with women: it had been trained to replicate existing hiring practices, which were themselves highly biased. The AI picked up on uses of “women’s” – as in “women’s chess club captain” – and marked those resumes down in the scoring system. On the basis of that biased data, Amazon’s system taught itself that male candidates were preferable.
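To illustrate the mechanism in its simplest form, here is a tiny, entirely made-up sketch – the resumes, hiring labels and model below have nothing to do with Amazon’s actual system – showing how a classifier trained on biased historical decisions learns to penalize a gendered word:

```python
# Toy illustration of how a model trained on biased historical decisions
# reproduces that bias. The resumes, labels and model are entirely made up;
# this is not Amazon's system, just the general mechanism.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, python, leadership",
    "women's chess club captain, python, leadership",
    "rowing team, java, project management",
    "women's rowing team, java, project management",
    "robotics club, c++, internships",
    "women's robotics club, c++, internships",
]
# Historical hiring outcomes that were themselves biased: identical
# profiles, but the "women's" resumes were rejected.
hired = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("Learned weight for the token 'women':", round(weights["women"], 3))
```

The coefficient for the token “women” comes out clearly negative: nobody programmed the bias in, the model simply reproduced what it found in the data.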

The COMPAS algorithm (which stands for Correctional Offender Management Profiling for Alternative Sanctions) was developed to predict the likelihood of a criminal reoffending, acting as a guide in sentencing decisions. It systematically rated black defendants as higher risk than white defendants: black defendants were almost twice as likely to be wrongly classified as high risk (45%) as their white counterparts (23%). ProPublica analyzed the COMPAS software and concluded that “it is no better than random, untrained people on the internet”.

Technology has always had a tendency to magnify existing trends. And so, when it comes to the ‘broken’ parts of our society – those that lie at the roots of sexism, racism, ageism and a lot of other biases – it unsurprisingly follows the same dynamic, because that is what it finds in our data. This should be high on the agenda of every company investing in customer analytics: how can we make sure that our AI systems do not further amplify existing inequality?