Artificial intelligence and empathy
Science fiction has often described artificial intelligence as something that could dominate or even enslave humans.
Today AI is an integral part of our daily lives: we use our phones’ voice assistants and home automation devices, interact with chatbots and virtual assistants, and receive personalized suggestions based on algorithms.
Still, when the conversation turns to AI and empathy, or conversely to possible bias, many people continue to look at artificial intelligence systems with suspicion.
The Pegasystems survey of a panel of 6,000 consumers from North America, the UK, Australia, Japan, Germany and France reports people’s common opinions and concerns about the topic of AI and empathy.
Empathy is definitely not the main focus of most organizations’ customer care strategies: every day we all receive piles of e-mails from brands that want to sell more without considering our real needs or interests.
Moreover, even companies that focus on personalized user experience tend to treat customers as “segments” or “categories” rather than as individuals.
In the Pegasystems survey, 68% of respondents prefer to interact with a real person rather than a machine, but 40% agree that AI has the potential to improve interactions with customer service.
AI offers the opportunity for organizations to be more empathetic when interacting with customers if the technology is properly set up and monitored.
One of the most common concerns is that AI may replace human work: 35% of respondents think the technology could actually reduce the number of job opportunities.
We should not forget, however, that automation mostly targets recurring, time-consuming activities that people do not really want to do. Think of Google Translate or Netflix: how many people would have to be employed to translate every request into every existing language, or to analyze users’ preferences and produce movie suggestions?
Another concern involves bias: 54% of the panel think that human cognitive biases may be transferred to AI systems and influence the way they interact with certain categories of people or in certain situations. For this reason, the learning ability of artificial intelligence, and the biases it may develop from the data it collects and the context of interaction, must be monitored and controlled by humans.
38% of respondents do not believe that artificial intelligence can really understand their preferences the way a real person can, yet 30% are comfortable with companies that use AI systems to interact with customers.
Technology is becoming more and more sophisticated, and sometimes people cannot tell whether they are interacting with a human or a machine, just as Alan Turing predicted.
Artificial intelligence and empathy: ethical principles and transparency
As the Pegasystems research puts it, “empathy is not about humans versus AI; it’s about using the best of what both have to offer”.
The future of AI-based decisioning is a combination of insights and data collected by technology and ethical analysis carried out by humans.
Transparency, for example, is a criterion that can really make the difference, because it helps customers understand how an AI system reaches a given decision.
Empathetic technology does not mean giving machines feelings; it means using AI together with ethical principles to determine the best action to take for customer satisfaction.
Setting up a conversational strategy based on empathy, for example, does not mean bombarding customers with our best offers; it means taking the best action for them, such as pausing to listen to their needs or simply saying “thank you”.
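To make the idea concrete, here is a minimal, hypothetical sketch of that kind of empathy-first decision logic in Python. The customer attributes, thresholds and the "listen" / "thank_you" actions are illustrative assumptions, not anything defined in the Pegasystems study; the point is simply that non-commercial, empathetic actions can be ranked ahead of an offer when the customer’s context calls for it.

```python
# Hypothetical sketch: empathy rules are evaluated before any sales logic.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    has_open_complaint: bool   # e.g. an unresolved support ticket
    recently_purchased: bool   # bought something in the last few days
    offer_propensity: float    # modeled likelihood of accepting an offer (0..1)

def next_best_action(customer: Customer) -> str:
    """Pick one action for this customer, applying empathy rules before sales logic."""
    # Empathy rules come first: do not try to sell to someone who needs help.
    if customer.has_open_complaint:
        return "listen"        # pause the sales conversation and address the issue
    if customer.recently_purchased:
        return "thank_you"     # acknowledge the purchase instead of pushing more offers
    # Only when no empathy rule applies do we fall back to the commercial decision.
    if customer.offer_propensity >= 0.6:
        return "make_offer"
    return "no_action"         # sometimes the most empathetic action is silence

if __name__ == "__main__":
    panel = [
        Customer("Ada", has_open_complaint=True, recently_purchased=False, offer_propensity=0.9),
        Customer("Ben", has_open_complaint=False, recently_purchased=True, offer_propensity=0.8),
        Customer("Cho", has_open_complaint=False, recently_purchased=False, offer_propensity=0.7),
    ]
    for customer in panel:
        print(customer.name, "->", next_best_action(customer))
```

In this toy version the rules are hand-written, but the same priority order could sit on top of any propensity model: the commercial score only matters once the empathetic checks have passed.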
People are responsible for technology and they must recognize its potential, as well as its limits and weaknesses.
In short, the only way to benefit from AI is to take control of it, prove its value through responsible applications, and direct its power toward improving outcomes.
Source:
https://www.pega.com/system/files/resources/2019-11/pega-ai-empathy-study.pdf