We assign emotional intelligence to AI. It learns how we behave and reason. And then it takes over. It’s one of the oldest tales in science fiction. But with the advent of emotional AI and the astonishing speed with which the world of work is changing, is this overused trope morphing into reality? 

First, a disclaimer. Any debate about emotional artificial intelligence should be taken with a grain of salt because, despite decades of research, we still lack a universal definition of emotion, let alone a detailed understanding of how biological brains work. That alone limits any claim about what AI can and cannot do. Second, whether or not AI can grasp human emotions is the wrong question.

The right question, according to AI researchers Yasmina El Fassi and Sara Portell, is how rapidly companies will use emotional AI to deliver hyper-personalised human experiences. And that, it seems, is already happening in industries like customer service and human-centred design. 

By studying practical applications of AI in the workplace, we can infer that AI is set to revolutionise the human experience. Companies that leverage emotional AI to improve service delivery will be the ones that come out on top.  

What is emotional AI?

Emotional AI first cropped up as a concept in 1997, when researcher Rosalind Picard published her book “Affective Computing”, exploring how computing relates to, arises from, and can influence emotion. Her research sparked a wave of studies on the emotional interactions between humans and machines, leading to what we now call emotional artificial intelligence.

Today, emotional AI refers to the ability of machines to recognise, interpret, and respond to human emotions, made possible by great leaps in machine learning and deep learning. Also known as affective computing, the field gives computers the power to analyse large volumes of data and extract meaningful insights about human emotional states.

Yasmina El Fassi, a solutions consultant at Zendesk, is writing her thesis on emotional AI. She explains how “emotional AI systems use a combination of both discriminative and generative AI. The discriminative model will classify facial expressions into emotional categories, then use the generative model to interact with the user and generate more emotional responses.”
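
To make that two-stage architecture concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: the feature values, threshold classifier, and template responses only illustrate how a discriminative classifier can feed a generative responder; a production system would pair a trained vision model with a large language model.

```python
# A toy version of the two-stage pattern Yasmina describes: a discriminative
# step assigns an emotion label, and a generative step conditions its reply
# on that label. Both steps are heuristic stand-ins for real models.

def classify_expression(features: dict) -> str:
    """Discriminative step: map facial-expression features to an emotion label.
    A real system would use a trained vision model; this stub uses thresholds."""
    if features.get("brow_furrow", 0.0) > 0.5:
        return "frustrated"
    if features.get("smile", 0.0) > 0.5:
        return "happy"
    return "neutral"

def generate_response(message: str, emotion: str) -> str:
    """Generative step: adapt the reply to the detected emotion.
    A real system would prompt a language model; this stub uses templates."""
    openers = {
        "frustrated": "I'm sorry this has been frustrating. ",
        "happy": "Glad to hear things are going well! ",
        "neutral": "",
    }
    return openers[emotion] + f"Here is what I can do about '{message}'."

features = {"brow_furrow": 0.8, "smile": 0.1}  # e.g. output of a vision model
print(generate_response("my order never arrived", classify_expression(features)))
```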

I don’t see [AI] as a replacement. I see it as a tool. As humans, we need tools to help us make better decisions.

Sara Portell

VP of UX @ Unit4

For example, advanced multimodal AI assistants like chatbots are increasingly being used to study customers’ emotions. Zendesk, a help desk software company, uses an AI chatbot to classify incoming messages by the customer’s intent and emotional tone, enabling agents to grasp the situation and prioritise tickets rapidly.
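
As a rough illustration of that triage idea, the sketch below scores each message by intent and emotional tone and sorts the queue so the most urgent cases surface first. The keyword rules and weights are invented stand-ins for trained classifiers, not Zendesk’s actual (proprietary) models.

```python
# Illustrative emotion-aware triage: score each incoming message by intent
# and tone, then rank the inbox so urgent, upset customers surface first.
# The keyword rules below stand in for trained intent/sentiment classifiers.

def detect_intent(text: str) -> str:
    t = text.lower()
    if "refund" in t or "charged" in t:
        return "billing"
    if "error" in t or "broken" in t:
        return "technical"
    return "general"

def detect_tone(text: str) -> str:
    t = text.lower()
    if "unacceptable" in t or "furious" in t or "!!" in t:
        return "angry"
    if "thanks" in t or "please" in t:
        return "calm"
    return "neutral"

TONE_WEIGHT = {"angry": 3, "neutral": 1, "calm": 0}
INTENT_WEIGHT = {"billing": 2, "technical": 2, "general": 1}

def priority(message: str) -> int:
    return TONE_WEIGHT[detect_tone(message)] + INTENT_WEIGHT[detect_intent(message)]

inbox = [
    "Thanks for the quick reply, one more question about setup.",
    "This is unacceptable!! I was charged twice and want a refund.",
    "The export button has shown an error since yesterday.",
]
for msg in sorted(inbox, key=priority, reverse=True):
    print(priority(msg), "-", msg)
```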

Whether registering a customer’s frustration during a support call or picking up on a buyer’s happiness from the way their voice pitches, emotional AI is transforming how we engage with technology. And affective computing is growing steadily: over 72% of companies adopted AI-driven solutions in 2024, a 22-percentage-point leap over the previous six years, reports McKinsey in its Global Survey on AI. Recent breakthroughs are only accelerating AI adoption in the workplace, which raises the question: is AI starting to mimic cognitive abilities?

Can AI grasp emotions?

At the heart of the debate around emotional AI is the idea of emotional intelligence. What is emotion, even? Put psychologists in a room together and they will quibble over the definition. There is no academic consensus on how emotions are expressed or, for that matter, interpreted. Lisa Feldman Barrett, a professor of psychology at Northeastern University, teamed up with other scientists in 2019 to determine whether emotions can be read from facial movements alone. They reviewed and summarised more than 1,000 papers, concluding that they cannot.

Ultimately, if there is no agreed baseline for defining emotion, how, then, can we trust AI to grasp it?

We cannot forget that language models have hallucinations.

Yasmina El Fassi

Solutions Consultant @ Zendesk

Sara Portell is the VP of User Experience at Unit4, an enterprise software company that uses AI to enhance its ERP applications. She holds a Master’s degree in Behavioural Science, and her research thesis focuses on leveraging AI to enhance employees’ psychological well-being.  

“Emotions are very complex. AI might be able to recognise some basic emotions, but when it comes to getting a nuanced understanding in different contexts and multicultural scenarios, we’re not there yet”, Sara says. “I don’t see [AI] as a replacement. I see it as a tool. As humans, we need tools to help us make better decisions. Now we can have a system that allows us to interpret emotions, understand our emotions, and personalise experiences accordingly.” 

While AI and large language models can predict logical patterns, they still have a long way to go in mimicking the depth of human emotions.  

AI’s great leap forward

While we currently have the upper hand at reading emotions, machines are gaining ground. Under ideal, controlled conditions, AI software is astonishingly accurate, explains Yasmina.

“If you have a blank background, are sitting in a certain way, and there are no obstructions, like hair hiding your face or a movement that covers it, then with a standardised data set, in the best scenario, it will work out.”

Indeed, machines excel at analysing large data sets and, with the correct input, can interpret images and register subtleties in micro-expressions that a human might not pick up on as quickly. With careful prompt engineering, machines trained to speak that language can help create more effective human experiences.

“We’re entering a world where we might have agents [acting like] Copilots that can be integrated into different devices or applications,” explains Sara. “Emotional AI is very important here because we’re going to be co-creating with technology, co-managing employees, and co-completing tasks with the machine. It’s important for us to have a better understanding of each other to become more efficient.” 

Friend or foe?

While the debate around AI rages on, two polarising views remain: AI is either going to solve all our problems, or it’s going to take over our jobs and our lives. Which is it? 

According to Yasmina, as with every technology, there are early adopters who champion its effectiveness. From boosting productivity to having a personal assistant that gauges your emotional state and responds accordingly, many proponents think it’s time to accept emotional AI as part of our future.

For Sara, “it boils down to trust. Can users trust this technology? How reliable is it? Is it able to do what it is meant to do consistently? Is it really tracking [our emotions]? Is it understanding what we are talking about?” 

When it comes to the reliability of AI, studies have demonstrated biases in the technology, which can lead to harmful outcomes like discrimination and racism if deployed in areas such as recruitment, performance evaluations, and policing.

You will always need a human in the loop to verify the answers. That’s very important when dealing with customers. You can build a relationship for ten years and end it with one wrong message.

Yasmina El Fassi

Solutions Consultant @ Zendesk

A few years ago, Sara worked for an innovation lab that used different types of biometrics to collect emotional data. One was facial electromyography (facial EMG), which measures facial muscle movements. When tested on UK and US participants, the data was about 70% to 80% accurate. But when Asian participants were introduced, accuracy dropped so far that the data was deemed unreliable. What this illustrates is that a study can reveal an average emotional truth across a large data set, but for a single person or an under-represented group, that average breaks down.

Along with perpetuating biases, large language models also tend to fabricate information. Yasmina comments on this trend. “Google recently purchased data from Reddit. It’s generating crazy responses”, she says. “A user on Google recently asked, ‘Should pregnant women smoke?’ and Gemini replied saying it’s recommended to smoke 2 to 3 cigarettes a day because it had data from Reddit and blogs. We cannot forget that language models have hallucinations.” 

Yet, despite its limitations, the technology is showing immense promise in several fields. One is psychotherapy, where emotional intelligence software enables therapists to track their patients’ facial expressions and reactions to certain topics. This, in turn, has helped therapists gauge treatment progress, adjust direction when needed, and monitor alignment between therapist and patient.

The customer service industry is another field where emotional AI is showing positive signals. 

How emotional AI creates hyper-personalised customer experiences 

Among the industries most impacted by generative AI is customer experience (CX). According to Zendesk, 70% of CX leaders say that generative AI has led their organisations to take a step back and re-evaluate their customer experience. What’s more, customers are ready for hyper-personalised experiences, with over 75% of consumers citing their enthusiasm for AI-driven solutions. 

With leaders on the lookout for new tools to adopt, one thing is clear: emotional AI can provide more customised and satisfying experiences for customers.

One example of this is bots powered by emotional AI. Built on machine learning, deep learning, and sentiment analysis, these bots differ markedly from regular rule-based bots: they are designed to detect, interpret, and respond to customer emotions, tailoring their responses to the emotional signals they pick up from users.

When regulated and deployed cautiously, AI chatbots can manage 30% of live chat communications and 80% of routine tasks, freeing up customer support agents’ time to focus on priority cases and on more creative, complex work.

However, according to Yasmina, emotion-driven chatbots will never replace humans. “You will always need a human in the loop”, the researcher believes. “You will need to verify the answers and see if there are no hallucinations because that’s very important when dealing with customers. You can build a relationship for ten years and end it with one wrong message.” 
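
That human-in-the-loop principle is easy to express in code. The sketch below is an assumption-laden illustration, not a real product: a stand-in classifier drafts an emotion-conditioned reply, and anything low-confidence or clearly heated is routed to a human agent before it reaches the customer.

```python
# Human-in-the-loop gate, as a toy: the bot drafts a reply conditioned on the
# detected emotion, but low-confidence or heated cases are escalated to a
# human agent for verification. Threshold and classifier are assumptions.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tuned per deployment in practice

@dataclass
class Draft:
    reply: str
    emotion: str
    confidence: float  # classifier's confidence in its emotion label

def draft_reply(message: str) -> Draft:
    """Stand-in for a sentiment classifier plus a generative model."""
    heated = "angry" in message.lower() or "!!" in message
    emotion = "frustrated" if heated else "neutral"
    confidence = 0.90 if heated else 0.92
    return Draft(f"[{emotion}-toned reply to: {message!r}]", emotion, confidence)

def handle(message: str) -> str:
    draft = draft_reply(message)
    if draft.emotion == "frustrated" or draft.confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to human agent, draft attached: {draft.reply}"
    return f"SEND automatically: {draft.reply}"

print(handle("Where can I download my invoice?"))
print(handle("I'm angry!! This is the third failed delivery."))
```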

Another caveat: since emotions are highly context-specific, neither a human nor an AI can yet read them with full accuracy. Coupled with ethical concerns around how emotional data is used for commercial ends, this makes winning over customers challenging.

Like with any technology, responsible applications are key. And the best way to do this? Focus on the user experience.  

The UX of AI

Netflix’s Terms and Conditions make no mention of how emotional data is gathered. In its settings, however, the streaming platform mentions user testing, referring to the emotion analytics it gathers while users watch content or browse the app. Switch the feature off, and the experience immediately turns generic and bland: Netflix no longer remembers you or your preferences, and the personalisation disappears.

According to Sara, two levels of research are key to emotional AI. One is using the technology to understand emotions and design better experiences. The other is real-time emotion analytics embedded in the product to personalise the experience automatically. Both are crucial for enhancing the user experience.

When working with UX, researchers form an integral part of the process. 

AI and UX form a powerful combination where you can truly improve the experience and build a meaningful and valuable product or service to solve that need.

Sara Portell

VP of UX @ Unit4

“The researcher is the expert,” Sara notes. “You have a ton of data you need to make sense of. How do you interpret it? Researchers cannot go away; otherwise, it would be really difficult to interpret and use that data in meaningful ways.” 

But it’s not just about interpreting data. You also need humans to translate data into action plans. What comes after the research requires skills like critical thinking and problem-solving.  

This is where human-centred design plays a critical role in enhancing AI’s effectiveness and, ultimately, its reliability.  

Why human-centred AI design holds all the cards

Human-centred AI design puts the human first. It focuses on solving problems from the user’s perspective, studying their issues, pains, and frustrations, and then responding with customised, smart solutions that enhance human experiences and interactions. At its essence, human-centred AI design answers a real user need.

“AI and UX form a powerful combination where you can truly improve the experience and build a meaningful and valuable product or service to solve that need,” Sara notes. “It’s a technology you will be using to solve a need, and humans need to be part of the process from the start to understand that problem and be able to design that service throughout the entire process.”

The role of empathy in human-centred design is to step into other people’s shoes, understand what makes them tick, and start solving problems from their perspective. It shifts the focus away from people’s interactions with AI agents and toward the interaction between people and their context.

Ultimately, emotion design opens a floodgate of opportunity, enabling researchers and designers to study user emotions and to elicit and shape emotions within the experience itself. Pair this with AI, and you get powerful experiences for end users.

The future can be bright for emotional AI

From Hollywood depictions of humans falling in love with AI-driven operating systems to full-blown scaremongering movies, the emotional aspect of AI has taken on a life of its own in pop culture.

In reality, AI may never be able to detect a human’s emotions with 100% accuracy or provide bias-free interpretations, since humans are highly subjective beings with nuanced experiences. This only strengthens the argument that AI technology should be applied responsibly, with human-centred AI design paving the way. Lead by using AI with humanity, and AI might just grasp human emotions.