Conversational AI's empathetic responses are 'deeply morally problematic', AHC philosopher argues
Philosopher makes case against empathetic conversational AI
Emotional regulation has no place in conversational artificial intelligence (AI), and the technology’s empathetic responses are “deeply morally problematic”, a researcher in the Faculty of Arts, Humanities & Cultures has argued.
Recent work in conversational AI has focused on generating empathetic responses to users’ emotional states, both to build and maintain engagement and rapport with the user and to simulate intelligence. Much has been made of the therapeutic potential of such technologies.
But Dr Alba Curry, Lecturer in Philosophy in the School of Philosophy, Religion & History of Science, argues that AI-aided emotional regulation can have negative consequences for users and society.

In a paper published in the prestigious Findings of the Association for Computational Linguistics with co-investigator Dr Amanda Cercas Curry (MilaNLP, Department of Computing Sciences, Bocconi University), Alba warns that such regulation tends towards a “one-noted happiness defined as only the absence of ‘negative’ emotions.”
Emotions are an integral part of being human: they guide not only our understanding of the world but also our actions within it, so whether we soothe or inflame an emotion is not inconsequential. While humans inevitably show empathy for one another, conversational AI cannot understand the emotion it responds to and so cannot make an accurate judgement about its reasonableness. This lack of understanding matters because we cannot predict the consequences of assuaging or aggravating an emotion, and a dialogue system cannot be held accountable for them.
The research forms part of a wider international and interdisciplinary collaboration on AI and emotions with MilaNLP at Bocconi University in Milan. The project, called PeNLP (pronounced ‘Penelope’), aims to bring together the philosophy of emotions and natural language processing (NLP).
This collaboration has also involved examining societal biases and stereotypes in emotion attribution in five state-of-the-art large language models (LLMs). In the first study of its kind, Alba and her colleagues found strong evidence that all five models consistently exhibit gendered patterns of emotion attribution, and that these variations are shaped by gender stereotypes.
Alba said: “Our results raise questions about using LLMs for emotion-related natural language processing tasks and emphasise the importance of examining and improving LLMs’ fairness and inclusiveness. We advocate for more interdisciplinary collaboration to build upon prior research in this domain.”
The team has also investigated emotion analysis (EA) in natural language processing. In a review of more than 150 Association for Computational Linguistics papers published between 2014 and 2022, Alba and her colleagues identified demographic and cultural gaps, poor matches between emotion categories and downstream goals, the lack of a standard, systematic nomenclature in EA, and an absence of interdisciplinary research. For each gap, the team proposes future directions to build meaningful links between disciplines, facilitate targeted study, and enable more nuanced emotion modelling in NLP.