AI may not be sentient (at least not yet), but it can understand, recognise and ultimately mimic human emotions.

This is known as Emotion AI – a type of AI that can recognise, interpret and respond to emotions. It does this by analysing facial expressions, tone of voice, body language and text to understand how someone is feeling. It might sound a little ominous at first, but it holds real potential for deepening our understanding of human psychology.

For example, AI startups operating in healthcare could leverage this technology to gain further insights into health conditions, improve patient engagement and detect emotional signs of conditions such as depression or anxiety through apps and other digital platforms.

However, concerns have been raised over data privacy, bias and inaccuracies. If the technology is not operated and supervised properly, these issues can cause severe harm to individuals and expose businesses to serious legal problems.

In this article, we’ll explore the ins and outs of Emotion AI, how it is used and ethical considerations companies should be aware of. 

Core technologies of Emotion AI

There are three key types of Emotion AI: text emotion, voice emotion and video emotion.

For text, this means detecting emotion in written messages, such as online comments, social media posts or emojis. For example, a study carried out by The University of Auckland used Emotion AI to see whether it could distinguish emotions in tweets from X (formerly Twitter). Using a “transformer transfer learning” model, a sophisticated AI architecture widely used in natural language processing, the study reported an 84% accuracy rate in detecting feelings and sentiments from text.
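To make the idea concrete, here is a deliberately simplified, lexicon-based sketch of text emotion detection in Python. It is not the study’s transformer model: the word lists and the `detect_emotion` function are invented purely for illustration, whereas real systems learn these associations from large volumes of labelled data.

```python
# Simplified, lexicon-based sketch of text emotion detection.
# Real Emotion AI systems (like the transformer model in the Auckland study)
# learn word-emotion associations from data rather than a hand-written list.

EMOTION_LEXICON = {
    "joy": {"happy", "great", "love", "excited", "wonderful"},
    "anger": {"angry", "furious", "hate", "annoyed", "outraged"},
    "sadness": {"sad", "miserable", "disappointed", "lonely", "crying"},
}

def detect_emotion(text: str) -> str:
    """Return the emotion whose word list overlaps most with the text."""
    words = set(text.lower().split())
    scores = {emotion: len(words & vocab) for emotion, vocab in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy and excited about this!"))   # joy
print(detect_emotion("The delivery was late and I am furious"))  # anger
```

A trained model replaces the fixed word lists with learned representations, which is what allows it to handle sarcasm, slang and context far better than a keyword match ever could.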

A similar practice can be applied to voice analysis, such as evaluating customer service calls to determine the vocal patterns and content of a conversation. According to a study reported by International Finance, 35% of organisations are using voice-processing AI. Meanwhile, the voice recognition market is expected to reach $50 billion by 2029, while the AI voice generator market is predicted to hit $4.9 billion by 2032.

Finally, Emotion AI can determine emotions from videos by analysing facial expressions, voice patterns and body language. As of 2023, the most advanced facial emotion recognition (FER) software can achieve a 75-80% accuracy rate – a close match to the human ability to detect feelings, which sits at around 90%.

Applications of Emotion AI

As of September 2024, around 1 in 7 (15%) UK businesses have adopted some form of AI technology. What’s more, there are now over 3,000 AI companies in the country, generating more than £10 million in revenue and employing over 60,000 people in AI-related roles.

The UK government has caught on to AI’s potential, having introduced its AI Opportunities Action Plan earlier this year. The plan’s promises include investing in a new supercomputer to aid AI research and development (R&D), expanding the pool of AI research talent to tackle the digital skills shortage and building AI-focused “growth zones”.

As for Emotion AI specifically, different sectors can benefit from its technology, particularly healthcare, marketing and customer service.

Emotion AI in healthcare

For the healthcare industry, Emotion AI can be advantageous in treating mental health.

Specifically, Emotion AI can analyse voice tones, speech patterns and facial expressions to determine a patient’s state of mind. This way, therapists and other mental health professionals can get a better understanding of a patient’s mental wellbeing, resulting in more accurate treatment and more personalised care. It can also support therapy sessions by tracking a patient’s progress, identifying potential emotional distress and ensuring emotional responses are accurately recorded.

Case study: Cogs AI

Cogs AI is a mental health support app designed specifically for neurodivergent people. Featured in the Startups 100 Index for 2025, it uses Emotion AI to analyse text and voice notes, helping users understand their feelings and interrupt unhelpful thought patterns. Thanks to its neurodivergent-specific features, users reported a 30% improvement in their mental health after just two weeks of using the app.

Emotion AI in marketing

Emotion and storytelling have always been strong elements of effective marketing, but with Emotion AI, companies can tap into the subconscious behaviours of consumers. Marketers can personalise their messages by analysing emotional data.

For example, if a customer consistently expresses positive emotions towards a specific clothing line, a business can serve personalised style recommendations that align with their preferences. This in turn builds a stronger emotional connection, as the customer feels more understood.

Case study: McDonald’s Portugal

Emotion AI also offers real-time insights into customer reactions, allowing brands to adapt their marketing strategies on the fly.

McDonald’s Portugal leveraged this through its “Mood de Mac” campaign. Through MorphCast Facial Emotion AI, the company was able to offer personalised content and marketing promotions based on real-time emotional analysis.

Emotion AI in customer service

When interacting with customers, personalisation is key, and Emotion AI is a natural fit. By analysing tone, language and context, businesses can use Emotion AI to deliver more empathetic and relevant responses to customer queries or complaints.

Moreover, similar to how it works with marketing practices, companies can respond to customer feedback in real time, enabling immediate issue resolution. They can also leverage emotional data to predict customer behaviour, preferences and potential problems, in turn proactively addressing customer needs and improving satisfaction.
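As a rough illustration of how emotional data can drive customer service decisions, the sketch below escalates messages that read as frustrated to a human agent. The keyword-based `detect_tone` check is a stand-in for a real Emotion AI service, and the routing labels are hypothetical, but the pattern of branching on detected emotion is the core idea.

```python
# Hypothetical sketch: routing customer messages based on detected emotion.
# In production, detect_tone would call an actual Emotion AI service;
# here it is a trivial keyword check purely for illustration.

def detect_tone(message: str) -> str:
    """Crude stand-in for an Emotion AI tone classifier."""
    negative = {"angry", "frustrated", "unacceptable", "complaint", "refund"}
    return "negative" if negative & set(message.lower().split()) else "neutral"

def route_message(message: str) -> str:
    """Escalate emotionally charged messages to a human agent."""
    if detect_tone(message) == "negative":
        return "human_agent"  # empathetic handling for frustrated customers
    return "chatbot"          # routine queries stay automated

print(route_message("I want a refund immediately"))   # human_agent
print(route_message("What are your opening hours?"))  # chatbot
```

The same branching logic could feed analytics instead of routing, flagging recurring negative interactions so teams can address problems proactively.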

Case study: PolyAI

Chatbots are useful for answering frequently asked questions and offering support 24/7. The problem is that they often miss the “human” side of interactions and, more often than not, can’t answer more complex questions. This is reflected in the fact that 78% of UK customers still prefer to speak to a person to handle their queries, even if it means longer waiting times on the phone.

PolyAI aims to tackle this with its AI voice assistants. Available 24/7, they use sophisticated machine learning to respond naturally within a conversation. A truly impressive entry from our Startups 100 Index, PolyAI’s solution has reduced call volumes by 50% and achieved an 85% customer satisfaction (CSAT) score.

Ethical considerations

While Emotion AI has proved beneficial in better understanding human emotions, concerns remain over privacy, bias and the use of sensitive emotional data.

Employee monitoring

For example, telecommunications company BT trialled new monitoring software last year that continuously filmed employees at their desks and tracked their emotions in real time. At the time, the company claimed this was to improve cybersecurity, but it risks employees feeling self-conscious and worried about their feelings being wrongly identified.

Moreover, other businesses have faced backlash for using monitoring software to track staff performance and office attendance. Most notably, accountancy company PwC announced in September 2024 that it would start tracking employees to ensure they were adhering to the company’s return to office (RTO) policy, which requires staff to be in the office at least three days a week. Barclays Bank was also criticised for monitoring employee productivity, including flagging toilet visits as “unaccounted activity”.

Handling sensitive data

As Emotion AI primarily relies on emotional data, this inevitably raises privacy concerns, particularly in regions with strict regulations, such as the General Data Protection Regulation (GDPR) in the UK and Europe. Therefore, companies have to ensure they get full consent from individuals before collecting and processing their emotional data.
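In practice, consent needs to be captured and checked before any emotional data is processed. The sketch below is illustrative only and not legal advice: the function names and in-memory storage are invented for this example, but it shows the basic pattern of refusing to process data for anyone without consent on record.

```python
# Illustrative sketch of consent-gated processing of emotional data.
# All names here are hypothetical; a real system would need durable storage,
# consent withdrawal, audit logging and a proper legal basis under GDPR.

from datetime import datetime, timezone

consent_records: dict[str, datetime] = {}  # user_id -> when consent was given

def record_consent(user_id: str) -> None:
    """Store a timestamped record of the user's explicit consent."""
    consent_records[user_id] = datetime.now(timezone.utc)

def process_emotional_data(user_id: str, sample: str) -> str:
    """Refuse to process emotional data without consent on record."""
    if user_id not in consent_records:
        raise PermissionError(f"No consent on record for {user_id}")
    return f"processed emotion data for {user_id}"

record_consent("user42")
print(process_emotional_data("user42", "voice sample"))
```

The key design choice is that the consent check sits inside the processing function itself, so no code path can analyse emotional data while bypassing it.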

Additionally, there is a risk of emotional data being misused or falling into the wrong hands, leading to potential discrimination or unethical targeting. For example, if a business uses Emotion AI for hiring decisions, candidates displaying stress or anxiety might be unfairly disadvantaged. Similarly, advertisers could exploit emotional insights to manipulate consumers into making purchases that they might not have made otherwise.

Bias and ethical use

Another significant ethical concern is the risk of bias in Emotion AI systems. These models are often trained on datasets that may not represent the full spectrum of human diversity, which could lead to inaccurate or discriminatory results. 

According to a study reported by The Guardian, Emotion AI has been shown to disproportionately attribute negative emotions to the faces of Black people, risking discrimination if used in areas like recruitment, medical diagnostics or performance reviews.

Building Emotion AI solutions

Emotion AI can be an exciting avenue for creativity and innovation, as it offers new ways to understand and respond to human emotions. However, it also raises important ethical questions that cannot be ignored and could lead to serious consequences if not handled correctly.

As the technology continues to evolve, founders and developers must take responsibility to ensure that their solutions are used ethically – respecting privacy, reducing bias and prioritising transparency. By doing so, they can harness the benefits of Emotion AI while minimising the risks and safeguarding individual rights.


