The future of AI and what Customer Experience leaders can learn from it
The killer applications
When it comes to killer applications in AI, Jonathan seemed most enthusiastic about what he called revolutionary “behind the scenes” applications in the industrial world. He gave the impressive but, to the public, almost invisible example of quality control for French fries, where extremely fast AI scanning systems spot and cut away the black spots on fries. It reminded me of a case from one of my customers, a wood production company: it scans the wood structure of a specific tree and then shows consumers exactly what it will look like on their floor, allowing them to pick out their preferred pattern and tree.
The returns of these industrial and agricultural AI systems for the end user go beyond hyperpersonalization and quality control, though. Sometimes they can even influence our health. Jonathan gave the example of AI-enabled greenhouses that create an ecosystem in which good parasites kill the bad ones, removing the need to destroy them with chemicals that are also harmful to the end user’s health.
The value of these AI applications does not just trickle down to the end user – in terms of quality control, efficiency and cost savings – it impacts employees as well, of course. “Robot picking with an AI camera system can create tremendous value for repetitive, difficult or dangerous labor”, Jonathan explained. “Without it, you might need large pools of people staring at production systems 24 hours a day, which is hardly a fulfilling job for the humans executing it.”
“Another very popular AI application is of course everything that has to do with go-to-market, marketing and sales: knowing what type of customers want your product or how they behave on social media”, Jonathan continued. He gave the example of tooling apps that can render the result of a home renovation in order to convince the end user. Or that in a couple of years, your architect will provide a 3D file of your house so that you can already shop for your bed, floor, curtains and whatever else you feel like, before a single stone has been laid. Or you could reach out to a marketplace of aesthetic advisors who style your living room on a “no cure, no pay” basis: your friends vote on the design via Facebook and if the final rating is, say, above eight, the advisors get the go-ahead for the project.
The last beacon: creativity or empathy?
When people talk about AI, it never takes long before the word creativity drops: what Jonathan called “the last beacon of humanity”. Once AI becomes creative, humanity will be in trouble, because everything that we can do, technology will be able to do too, possibly even better. That’s what most people believe. I, however, am convinced that there is an even bigger “obstacle” for tech to take: empathy. I absolutely believe that computers can be creative. We’ve already seen it with DeepMind’s AlphaGo, which came up with moves that were truly original and creative. But I don’t believe that a machine will ever be able to show empathy. And so, the challenge for us humans is to excel in those fields where computers lack the right functionalities: areas of emotional behavior, which go far beyond ‘just’ tracking emotion. That’s a true opportunity.
Though AI systems will be able to track emotions and know which emotional state you are in, Jonathan too believes that they will never be empathic because, for one thing, they don’t have five senses like us and thus can’t ‘feel’: “you cannot get something which is not like you to be truly empathic with you, because you don’t have the same consistency”. But he does believe that AI will at some point be able to mimic empathy in a very believable way. “GPT-3, for instance, could detect a psychotic episode much earlier than a normal human being through your WhatsApp messages, if it had been fed enough data on similar episodes. So AI could be a true aid in psychiatry and psychology. But it will never be true empathy. It would be more like a psychopath mimicking that feeling. So there’s a lot to be thought about on the ethical side.”
Overfiltering and overpersonalization
Another great concern of the public when it comes to AI is the filter effect. When you ask Amazon Alexa to purchase batteries, it will decide on the brand, based on your purchase history or perhaps on your propensity for discount products. It will become very difficult for brands to get through that filter once they have lost the connection with the end customer. A lot of people are very scared of that.
Jonathan believes that this is one of the biggest mistakes of the AI community: the dynamic of over-customization creating so-called echo chambers on Facebook and Google. It’s almost like having our own personal Truman Show, where we perceive the world only from inside our filtered bubble. That’s not the fault of the AI, of course, but of the overarching capitalist system. The goal of the AI system was programmed by a company, and if the direct goal of that company is not to contribute to human happiness and wellbeing, then you get the scenarios that the Social Dilemma documentary talks about: “like a Frankenstein that is exploiting our dopamine systems”, as Jonathan called it. “What needs to be solved in the next five years in order to come back to the true goal of our society”, he continued, “is that we start to contribute to a collective happiness that is better than yesterday’s.”
Do we need a Linus Torvalds for AI?
According to Jonathan, a solution for the problem with Big Tech and AI could be similar to what happened to the mainframes in the seventies. “We might see the rise of some sort of Linus Torvalds for AI and society, who will build truly open-source social networks. If people know that their privacy will be respected, no matter what, then they might gladly share their DNA to help research and help others. If the network is truly social, rather than commercial as it is now, then we’ll start to really see the positive effects of this type of platform.”
Though I do think that’s a really interesting idea, scaling these types of open-source platforms – without the well-oiled and well-funded marketing machine that the Big Tech giants have at hand – will not be an easy feat. Just think of Signal, for instance, which was the most downloaded app, at about a million downloads a day, at the height of the privacy policy controversy around WhatsApp. That sounds like a lot, but at that rate they would gain 365 million new users a year, which means they would have to keep it up for almost seven years before reaching the 2.5 billion users that WhatsApp currently has. Unless the curve becomes exponential, of course. My point is that moving a population to another platform is really hard when one already dominates.
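For those who like to check the math, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited above and assuming Signal’s peak download rate would somehow stay constant (linear growth, no churn):

```python
# Back-of-the-envelope check of the Signal vs. WhatsApp figures above.
# Assumptions (from the article): 1 million downloads/day at Signal's peak,
# 2.5 billion existing WhatsApp users, constant (linear) download rate.
downloads_per_day = 1_000_000
whatsapp_users = 2_500_000_000

new_users_per_year = downloads_per_day * 365           # 365 million per year
years_to_catch_up = whatsapp_users / new_users_per_year

print(f"New users per year: {new_users_per_year:,}")                    # 365,000,000
print(f"Years to reach WhatsApp's user base: {years_to_catch_up:.1f}")  # ~6.8
```

Even at that headline-grabbing rate, linear growth barely dents a dominant network’s installed base, which is exactly why the curve would need to turn exponential.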
We ended the conversation on a hopeful note, though, as Jonathan believes that “there is some self-regulation within society: if something gets exaggerated, then we’ll see a counter-dynamic that balances it out”. “If a critical mass of people is aware of this type of disaster – like, for instance, Facebook and Cambridge Analytica – then you’ll see a disproportionate rise of Signal-like platforms.”