Artificial Intelligence for Customer Experience
AI will be a game changer, but let's not repeat the mistakes of our (recent) past
I have been dabbling in Artificial Intelligence since my university days (I am a Software Engineer, after all). My interest mainly relates to what we call “algorithms” (the way we make computers do things) and “data structures” (the way we make computers store information so it can be retrieved later). Participating in the prestigious ACM Computer Science Programming Contests [1] (and winning several awards there) made me an Artificial Intelligence junkie. I even went a bit further and earned an Artificial Intelligence certification.
So all of this mayhem created by the now popular ChatGPT and its siblings (like DALL·E 2, capable of creating images from text descriptions) took me by surprise (and I am sure it did the same for most of the enthusiast community). Why? Because the GPT algorithms all of these products are based on are incapable of producing “true” Artificial Intelligence, or as we call it, General Artificial Intelligence (more on this later). This is a well-established point that is not in dispute.
If that’s the case, why have so many industry experts raised their voices about “the dangers of uncontrolled Artificial Intelligence development” [2]? There are diverse reasons, perhaps as many as there are people involved. First, there is the fear of the social and economic disruption AI might suddenly bring if it is widely adopted without first preparing the job market and the people [3]. And perhaps most important, there is the danger of developing a General Artificial Intelligence that might, for real, surpass us intellectually in every possible sense, without us, as a society, being ready for it.
But if GPT-based products aren’t (and won’t ever be) a General Artificial Intelligence, why the ruckus? Several prominent Artificial Intelligence scientists have also raised their voices to point this out. Perhaps the most important contribution has been Noam Chomsky’s New York Times article “The False Promise of ChatGPT” [4]. He clearly explains why GPT-based products aren’t a real threat from the General AI point of view, and only a moderate threat to the current workforce if we consider them “Narrow AI” assistants. He also explains how these products lack the ability to produce counterfactual conjectures or causal explanations, limiting themselves to spewing statistically generated responses based on the terabytes of data they have been fed. Simply put, they do not “think”.
For us Customer Experience specialists, all of this is of utmost importance. It is tempting to join the hype train and start thinking we will be able to replace our Customer Service Representatives with ChatGPT to cut costs, or even ask it to draft a proper Customer Experience Strategy for our business. But nothing could be further from the truth. We have a lot to learn from the trends and mistakes of the near past (from maddening telephone voice menus, commonly called IVRs, to the more recent service chatbots that can make a distressed customer throw their smartphone away in anger). Stay with me as we explore how this newly available tool can enhance our Customer Experience capabilities, while avoiding the pitfalls of automation (old and new).
In this post we will cover:
What is ChatGPT and how does it work?
What are we trying to solve with AI for Customer Experience?
What can we do with the current AI capabilities?
Where are we headed?
What is ChatGPT and how does it work?
We have already used the term “General Artificial Intelligence”. Let’s explain it a bit. Formally, Artificial Intelligence experts divide the discipline in two: General Artificial Intelligence and Narrow Artificial Intelligence. General AI is the kind of Artificial Intelligence we are used to seeing in science fiction movies. General AI should be capable of what we define as thinking: reflexive reasoning, moral evaluation, abstraction, conjecture, self-consciousness, etc. We would be able to hold an intelligent discussion with it, as if it were a human intelligence. Narrow AI is the kind of Artificial Intelligence we are currently used to: Tesla’s Full Self-Driving computer vision, Natural Language Processing tools like Alexa and Siri, Google’s image recognition and translation, etc. Narrow AI is focused on solving a very specific problem (recognizing images, translating texts, etc.), and it achieves this by using algorithmic techniques that usually have nothing in common with the way human intelligence approaches the same problem.
ChatGPT belongs to the second group: it is a Narrow AI. GPT stands for Generative Pre-trained Transformer (the name of the machine learning technique it uses). A GPT is an artificial neural network that is fed terabytes of data (OpenAI’s GPT-3 was trained on more than 40 terabytes of text pulled from open sources like Wikipedia and others). To simplify its inner workings as much as possible: the GPT ingests the text it is pre-trained with and builds a network of probabilities between the terms. The connections in that network are called “parameters”, and as with neurons, more connections mean a more powerful network (that’s why GPT-3 has 175 billion parameters, and GPT-4 has been speculated to have more than 100 trillion). Then, when prompted by the user, the GPT builds the answer that is most probable according to the statistics it has stored.
Think of how your smartphone’s predictive text completion works: you start writing, and after a few characters, the phone offers you the three most probable words. You either select one or finish writing a new one. Then the predictor suggests the three most probable words that might come after the last one. How does it do that? It has been fed millions of texts, so it knows which words occur most often after the one you just wrote. It suggests the next one, and the next one, and so on. Now imagine you can also do that with entire sentences, not only words: what is the next most probable sentence? Then with entire concepts and paragraphs. And then you find a way of doing it all in parallel (thanks to some algorithmic voodoo), instead of in sequence like the phone’s text predictor. Voilà, you have ChatGPT.
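The phone-predictor intuition above can be sketched in a few lines of Python: a toy bigram model that counts which word most often follows the one you just typed, then suggests the most frequent followers. This is only an illustration of the core idea (ChatGPT works on whole sequences with billions of parameters, in parallel), and the training text is made up for the example.

```python
from collections import Counter, defaultdict

# Tiny, made-up "training corpus" for illustration.
training_text = (
    "the customer is always right . the customer wants a refund . "
    "the agent helps the customer ."
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict(word, k=3):
    """Return the k most frequent words observed after `word`."""
    return [w for w, _ in follows[word].most_common(k)]

print(predict("the"))       # "customer" follows "the" most often in the corpus
print(predict("customer"))  # suggestions after "customer"
```

Replace the toy corpus with terabytes of text, predict whole continuations instead of single words, and learn the statistics with a neural network rather than a lookup table, and you have the (very rough) shape of a GPT.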
As you can see now, ChatGPT is not magic… though being able to produce such complex and articulate answers only from tons of data and probabilities surely deserves to be considered magic. But to use it responsibly, we must understand the basics of how it works, so we can be aware of its limitations. The first limitation you should consider is the training data. A GPT is only able to answer inside the boundaries of the information it was fed when its neural network was built. GPT-3, for example, only had information up to the last quarter of 2021. It ignores anything that happened after that. It also has only the information domains it was provided with. For example, there might be medical, legal, or astronomical concepts it is not aware of, because there wasn’t any data about them in its training. So, what happens if you ask it about something it doesn’t know? Well, it tries to answer with what it has, but it can end up making things up. Wildly. This is called a “hallucination”.
You address this issue by performing additional training on the GPT model with current data and information specific to your domain: Customer Experience, whatever your business is related to, your customer support documentation, and anything else you consider the model might need to generate a proper answer. Then you test it. Exhaustively. You simulate being a customer in need of support and ask the GPT questions. You check the responses. And you tune the model, feeding it more information, reinforcing the good answers and penalizing the bad ones. Eventually it will get there and be useful for your business. But it will require work. Stock ChatGPT doesn’t come ready for your business out of the box.
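The “test it exhaustively” step can be made concrete with a minimal evaluation harness: run a suite of simulated customer questions against the bot and check that each answer mentions the facts it must contain. Everything here is hypothetical for illustration; in particular, `ask_bot` is a stand-in for a call to your fine-tuned model, not a real API.

```python
def ask_bot(question: str) -> str:
    # Placeholder: in practice this would call your fine-tuned model.
    canned = {
        "refund": "Refunds are processed within 5 business days via the original payment method.",
        "hours": "Our support line is open 24/7.",
    }
    for topic, answer in canned.items():
        if topic in question.lower():
            return answer
    return "I'm not sure; let me connect you with a representative."

# Each test case: a simulated customer question, plus the facts the
# answer must mention for it to count as correct.
TEST_SUITE = [
    ("How do I get a refund?", ["5 business days"]),
    ("What are your support hours?", ["24/7"]),
    ("Can you fix my flux capacitor?", ["representative"]),  # unknown topic: must defer
]

def evaluate(suite):
    """Return the cases whose answers are missing required facts."""
    failures = []
    for question, required_facts in suite:
        answer = ask_bot(question)
        missing = [fact for fact in required_facts if fact not in answer]
        if missing:
            failures.append((question, missing))
    return failures

failures = evaluate(TEST_SUITE)
print(f"{len(TEST_SUITE) - len(failures)}/{len(TEST_SUITE)} cases passed")
```

The same loop drives the tuning cycle described above: every failing case tells you which information to feed back into the model before the next round of testing.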
So, if we have a tool that produces text that seems written by a human, but a human who has been asleep since 2021 and is driven to answer even when it knows nothing about a topic, could we really put it in charge of servicing our customers? Angry customers? Customers in sore need of a solution… and fast? To answer that, we should first outline the typical tasks we humans currently do in the Customer Experience domain, and see where a properly trained Narrow AI could help.
What are we trying to solve with AI for Customer Experience?
If we divide the kind of Customer Experience work we do in our businesses, it falls roughly into two levels: the Strategic level and the Operations level.
At the Strategic level, we collect information about the environment (both internal and external), analyze it (using methodologies like SWOT and PESTEL), create objectives, craft metrics and indicators, and define plans, Customer Journeys, etc.
At the Operations level, we track progress on projects, control the day-to-day work, collect customer information through surveys, interviews, etc., and receive customers’ calls, emails, and visits, which we need to answer with proper solutions, or in some cases by escalating the issue to a supplier and tracking the resolution on behalf of our customer.
As you can see, the Strategic level requires a lot of creativity and analysis, but also a great deal of data collection from diverse sources, plus calculations. The Operations level has more tasks that can be automated, but also a lot more direct interaction with customers, which requires careful responses, effective and timely solutions, and sometimes the ability to think outside the box or escalate the issue to a third party. Let’s keep in mind that the customer is probably not very happy about having to contact our support channels, so if they feel mistreated or delayed, the relationship could worsen. Narrow AI can help us in both scenarios, but we need to carefully choose which tasks are appropriate for AI and which ones require human intervention. We’ll go deeper into that later in this post.
The importance of NOT applying the wrong technology to the wrong problem cannot be overstated. We have many examples of this from the past:
With the advent of the IVR (Interactive Voice Response) in the 1970s came the widespread idea that most of the work done by customer representatives on the phone could be automated. Tasks like getting your bank account balance, or even making simple payments and transactions, could now be done without the intervention of a human representative. This, of course, meant a considerable reduction in customer service costs, plus 24/7 availability (a win-win for both businesses and customers). But then IVRs became complicated: layers and layers of menus to get to the desired answer, answers that often weren’t the expected ones, or even repeated hangups from the IVR platform before the problem was solved. There was no direct way to contact a (costly for the business) human representative, buried as they were beneath many levels of interactive menus. IVRs became a symbol of bad Customer Experience.
Natural Language Processing (NLP) systems became stronger with both Machine Learning techniques and the increase in computing power. Tools like Amazon Lex became available to provide voice-recognition IVRs. You no longer had to press numbers on your phone to state what you wanted to do: you could now speak your intentions, and the platform would understand you, ask you in natural human language for the rest of the information it needed, and fulfill your request. The same kind of NLP powers all kinds of chatbots (social media, WhatsApp, websites, etc.). But all of them are just IVRs that can now collect information and answer using natural language (without pressing buttons). They still need a pre-programmed response tree behind them to work (layers and layers of menus). Same issue as before. How many times have you yelled angrily at a bot: “I WANT TO TALK WITH A HUMAN REPRESENTATIVE!”?
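The “pre-programmed response tree” limitation can be shown with a toy sketch: every possible conversation path must be hand-built as a tree of menus, and anything outside the tree is a dead end. This is purely illustrative (real platforms like Amazon Lex model this with intents and slots rather than a literal dictionary, but the pre-programmed-paths limitation is the same).

```python
# A hand-built menu tree: each path the designer anticipated, and nothing else.
MENU_TREE = {
    "billing": {
        "balance": "Your balance is available in the app under Account.",
        "dispute": "Please fill out the dispute form on our website.",
    },
    "technical": {
        "reset": "Hold the power button for 10 seconds.",
    },
}

def navigate(path):
    """Walk the pre-programmed tree; any unplanned request is a dead end."""
    node = MENU_TREE
    for choice in path:
        if not isinstance(node, dict) or choice not in node:
            return "Sorry, I didn't understand. Main menu: billing, technical."
        node = node[choice]
    return node if isinstance(node, str) else f"Options: {', '.join(node)}"

print(navigate(["billing", "balance"]))  # an anticipated path works fine
print(navigate(["cancel", "account"]))   # an off-tree request hits a dead end
```

Whether the customer presses buttons, types, or speaks, a system built this way can only walk paths someone pre-programmed, which is exactly why the frustration carried over from IVRs to chatbots.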
And now, the shiny new GPT-powered chats (and soon, voice systems). We have already discussed the difference between Narrow AI and General AI, and the current limitations. Even if a response seems articulate and apparently correct, the risk of it being a “hallucination” exists, especially if the model hasn’t been properly trained for the knowledge domain where it is going to operate. This can lead not only to angry customers, but to malpractice lawsuits and liabilities that the business will have to absorb, because the automated tool cannot. Will we make the same mistake we made in the recent past?
What can we do with the current AI capabilities?
So, are GPT and the rest of the models doomed? Not at all. AI agents are tools, and we humans are responsible for using those tools successfully. We need to carefully pick the situations where we can apply AI safely, so that it enhances our capabilities beyond what we can do now. By understanding how these tools work, and their limitations, we can choose the right moments to integrate them into our workflows for the best results. We have not yet reached the point where General AI is available and we can depend on it as we would on another human being working side by side with us. Narrow AI tools need careful management and oversight.
Some examples of proper usage for Customer Experience, following our previous analysis, would be:
For Strategic-level Customer Experience, an AI tool can help us collect and summarize the information we need to analyze, saving us lots of time rummaging through the Internet and local documents. Products like Microsoft Copilot have already shown what can be achieved here.
It is also useful at the strategic level to use an AI tool to build analysis charts, like SWOT and PESTEL, based on the information available to the GPT model. The result probably won’t be perfect, but it is a great starting point, and it can even unearth trends that weren’t easily detectable by a human analyst (data-mining style).
It can be used to produce content, presentations, even publishable articles, all based on the information fed to the model. That’s an enormous time saver, allowing us to dedicate more time to the real strategic work.
At the Operational level, it can help craft action plans, based on the strategic documents already fed into the model, and update them based on changes that might happen over time. This greatly reduces the overhead of project management, and can even allow a project manager to oversee more projects with ease.
It can also help Customer Support Representatives, whether on the phone, over chat, or by email, find solutions to problems faster. It remains the representative’s job to make sure the proposed solution fits the situation, but the average time saved will be enormous.
For some self-support tasks, a GPT-powered chat, properly trained with information about the problem domain, can serve as a first level of service. If customers feel the information provided is not enough, they must be able to quickly transition to a human representative, to avoid the bad Customer Experience we already mentioned in the IVR and chatbot discussion. This could even be part of the intended workflow: save the human representatives time by having a GPT bot collect all the necessary information and the intended service request before handing the customer over for the real execution.
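The first-level-bot-with-handoff workflow boils down to one routing rule, sketched below. All names and the threshold value are assumptions for illustration: the point is that an explicit request for a person, or a low-confidence bot answer, escalates immediately instead of trapping the customer in a loop.

```python
# Phrases that mean the customer wants a person (assumed list, tune per language).
ESCALATION_PHRASES = ("human", "representative", "agent", "person")
CONFIDENCE_THRESHOLD = 0.75  # assumed tuning knob, not a standard value

def handle_message(text: str, bot_answer: str, bot_confidence: float) -> str:
    """Route a customer message: answer with the bot, or hand off to a human."""
    wants_human = any(phrase in text.lower() for phrase in ESCALATION_PHRASES)
    if wants_human or bot_confidence < CONFIDENCE_THRESHOLD:
        return "ESCALATE"   # transfer to a human, passing along the collected context
    return bot_answer       # the bot's answer is confident enough to send

print(handle_message("I want to talk to a HUMAN!", "Try rebooting.", 0.90))
print(handle_message("How do I reset my password?", "Use the 'Forgot password' link.", 0.92))
```

The key design choice is that escalation is always one message away: the bot saves the representative time on routine questions, but never stands between a frustrated customer and a person.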
Where are we headed?
Narrow AI based tools and solutions have already been with us for some time (Tesla’s self-driving cars, Google Translate, Amazon Alexa, etc.). GPT-based tools, like ChatGPT and all of the derived products, will be added to the mix, this time increasing the amount of work they can do for us. But, as we have discussed here, Narrow AI is not General AI. It cannot and won’t ever be able to “think”. It cannot know whether something is factually right or wrong, cannot make moral evaluations, etc. Some people will misuse the shiny new AI tools and try to put the blame on them (remember the IVRs). But other, better-informed people will apply AI the right way to the right problems. And they will thrive where others fail.
It is true that the job market will be affected by AI: that’s inevitable. We are on the brink of the 4th Industrial Revolution. But if you learn to use AI to be more effective and efficient in your work, you’ll have an edge thanks to AI instead of being replaced by it. Ask yourself: if you can really be 100% replaced by a Narrow AI tool, perhaps you have grown complacent with your skills and need to rethink how to put your human skills, unavailable to any AI (for now… until General AI comes), to use. Make this an opportunity, not a threat.
Keep informed. Stay on top of the situation. Customer Experience is going to be transformed by AI, along with every other industry. So make a habit of learning how to apply not only AI, but every technology, tool, strategy, and methodology available to enhance your Customer Experience delivery. By reading this article you have taken a first step in that direction. I encourage you to subscribe. It will give you access to a free webinar about how Customer Experience can make more money for you, and you’ll also make sure you don’t miss any of my articles on this topic (I publish twice a week).
Let me know in the comments if this was useful to you, what you would like explained in more detail, or anything else that comes to mind on this topic.
Cheers.
When you are ready, here is how I can help:
“Ready yourself for the Future” - Check my FREE instructional video (if you haven’t already)
If you think Artificial Intelligence, Cryptocurrencies, Robotics, etc. will cause businesses to go belly up in the next few years… you are right.
Read my FREE guide “7 New Technologies that can wreck your Business like Netflix wrecked Blockbuster” and learn which technologies you must be prepared to adopt, or risk joining Blockbuster, Kodak and the horse carriage into the Ancient History books.
References
1. ACM-ICPC International Collegiate Programming Contest
2. “Pause Giant AI Experiments: An Open Letter”, https://futureoflife.org/open-letter/pause-giant-ai-experiments/
3. “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models”
4. “The False Promise of ChatGPT”, https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html