Don’t tell me I am being emotional!
This article was originally inspired by Natalie Sheard’s article Can AI recognise human emotions? The science doesn’t stack up.
The debate over whether AI can ever truly understand the human soul will continue for years to come. Our daily lives revolve around human interaction, and engaging with other people requires us to understand how they are feeling. To differing degrees (and with differing levels of success), humans constantly gauge each other’s emotions to keep interactions running smoothly. Imagine if we had a tool to assist us with this. The power of AI to process information and assist with tasks, even taking some of them over for us, is clear and well known. But can we use AI for everything? Will AI be able to help us recognise human emotions? And should we even try?
Artificial Intelligence (AI) has made remarkable strides in various fields, but its journey into emotion recognition technology raises significant ethical concerns.
Some in the field, such as this article from the University of Auckland (AI can detect emotions in text according to new research – The University of Auckland), claim that AI can accurately detect human emotions, which is in itself paradoxically odd. Have we reduced human emotion to a set of technical calculations built on ones and zeros, with no room for grey areas? Human behaviour is so diverse, colourful and complex that it cannot be reduced to a set of algorithms, however sophisticated. The science behind these assertions is increasingly being called into question, including in the University of Auckland article itself, which acknowledges the technology’s limitations and the ethical concerns around its calculations and uses.
AI emotion recognition systems typically analyse the words used, facial expressions, voice tone, skin flush and heart rate to infer emotional states. However, this approach oversimplifies the complex nature of human emotions and fails to account for the numerous factors that influence emotional expression and the feelings at the core of an emotion. As with any new technology, it needs to be rigorously examined and tested under all conditions so that its limitations are understood through evidence-based findings. This allows ethical, safety and risk guardrails for its use to be established in an appropriate and informed way. The speed at which AI is being developed and pushed to consumers, whether they realise it or not, often precludes this level of analysis and understanding.
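To make the oversimplification concrete, here is a minimal, hypothetical sketch of how a naive text-based emotion detector might reduce emotion to a handful of measurable signals. The keyword lists, labels and example sentence are assumptions made purely for illustration and are not drawn from any real product; commercial systems are more sophisticated, but the underlying problem of inferring feeling from a narrow set of inputs remains.

```python
# A minimal, illustrative sketch (not any vendor's actual method) of a
# keyword-based text emotion classifier. Everything here is a simplifying
# assumption chosen to show what gets lost in the reduction.

from collections import Counter

# Hypothetical keyword-to-emotion mapping, invented for this example.
EMOTION_KEYWORDS = {
    "joy": {"happy", "great", "love", "wonderful"},
    "anger": {"furious", "annoyed", "hate", "unacceptable"},
    "sadness": {"sad", "disappointed", "miserable", "alone"},
}


def classify_emotion(text: str) -> str:
    """Guess an emotion from keyword counts alone.

    Note what is missing: context, culture, tone, sarcasm and,
    above all, the reason *why* the person feels this way.
    """
    words = {w.strip(".,!?").lower() for w in text.split()}
    scores = Counter(
        {emotion: len(words & keywords)
         for emotion, keywords in EMOTION_KEYWORDS.items()}
    )
    emotion, score = scores.most_common(1)[0]
    return emotion if score > 0 else "unknown"


if __name__ == "__main__":
    # A sarcastic sentence: keyword matching happily reports "joy",
    # because the nuance is invisible to this kind of analysis.
    print(classify_emotion("Great, another meeting. Just wonderful."))
```

Even adding facial expression or heart rate as further inputs does not change the basic limitation illustrated here: the system only scores the signals it can measure, not the meaning behind them.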
The Pitfalls of Emotion Recognition AI
Oversimplification of Human Complexity
Human emotions are highly complex and context-dependent. A person’s facial expression or voice tone can be influenced by a myriad of factors unrelated to their emotional state, such as pain, underlying medical conditions, or simple physical discomfort. Emotional expression can also change based on who a person is communicating with and what they are communicating. Think of how a parent may deliver the same message differently to a group of children than to their own child. AI systems, relying on a limited set of criteria, often fail to capture this nuance. An interesting consideration is the energy and environmental cost of gathering and storing all of these attributes, and the processing required for an AI system to ingest and analyse such complexity.
Cultural and Individual Differences
Emotional expression varies significantly across cultures, genders, ages, and individuals.
An AI system trained on a limited dataset may misinterpret emotions expressed by people from different backgrounds or by those with neurodiversity. Even datasets considered ‘large’ are biased towards populations with greater access to technology in the first place. Consider also a person being interrogated where, in the absence of other evidence, heavy reliance is placed on AI emotion recognition: guilty and innocent alike are judged against an imperfect standard, and the result could be a wrongful conviction.
The Missing ‘Why’
Perhaps the most critical flaw in AI emotion recognition is its inability to understand the underlying reasons for an emotion. Knowing that someone is annoyed is far less valuable than understanding why they feel that way.
Ethical Implications
The technology itself is not inherently good or bad, dangerous or otherwise. As with an axe, it is how someone chooses to use it that creates issues. The deployment of emotion recognition technology, particularly in sensitive areas such as workplace, education, medical, law enforcement or social services settings, among others, raises serious ethical concerns:
- Privacy Violations: These systems often collect and process highly sensitive personal data, potentially infringing on individuals’ privacy rights. The recently passed Privacy and Other Legislation Amendment Bill (2024) includes, within the definition of Personal Information, information that can collectively be used to establish patterns that identify an individual.
- Manipulation Risks: There’s a real danger that this technology could be used to manipulate people’s emotions or decision-making processes. Corporations already use sales techniques drawn from psychological research to manipulate consumers. With the addition of AI emotion recognition to tailor the corporation’s responses, this is further amplified.
- Bias and Discrimination: AI systems trained on limited datasets may perpetuate biases, leading to unfair treatment of certain groups. As previously mentioned, even large, global datasets can only draw on populations that have access to technology. Cultural differences in how and why emotion is expressed are so varied that the resulting gap would amplify bias and discrimination against groups already disadvantaged by a lack of access to technology.
- False Conclusions: Inaccurate emotion recognition could lead to misdiagnoses in healthcare settings or unfair treatment in workplaces or law enforcement settings.
- Accuracy and Reliability: Emotional expression is highly subjective and can change depending on what is happening at the time. It also varies culturally, leading to potential biases and misinterpretations by algorithms that are restricted in the attributes they can analyse.
- Psychological Impact: Constant emotional monitoring, particularly in workplace settings, could lead to feelings of surveillance and negatively impact mental health and autonomy.
Regulatory Response
Recognising these risks, the European Union (EU) has taken steps to regulate the use of emotion recognition AI. The EU AI Act bans the use of these systems in workplaces, schools, and other sensitive settings, except for specific medical or safety reasons. However, many countries, including Australia, lag behind in specific regulation and in practical, pragmatic and understandable advice and actions for industry and the public in relation to these technologies.
This is not to say that Australia is completely lacking. Australia has various standards and guidance at state and federal levels that include:
- Australia’s AI Ethics Principles – Department of Industry, Science and Resources
- Voluntary AI Safety Standard – Department of Industry, Science and Resources
- WA Government Artificial Intelligence Policy and Assurance Framework
The Department of Industry, Science and Resources has also proposed mandatory guardrails for AI in high-risk settings: Introducing mandatory guardrails for AI in high-risk settings: proposals paper – Consult hub.
These are a step forward for Australia, but there are still large gaps in the guardrails themselves and in holding companies to them. Consumers also need the awareness to make informed decisions when AI is used directly, such as when actively using Generative AI, or indirectly, such as through tailored sales pitches or when AI is used to assess and rank individuals in a queue. The maturity of the regulatory and advisory space in Australia still requires a deep understanding of the technology, and of the right business process and information management questions to ask, in order to achieve some semblance of safe, ethical and risk-managed use. This is especially true when AI is applied in areas that are more nuanced and carry greater consequences, such as emotion recognition.
The Way Forward
While AI has shown promise in many areas, emotion recognition remains a field where its application is fraught with danger and risk to individuals. The complexity of human emotion, combined with the vast array of factors that influence its expression, makes it challenging for AI to accurately interpret emotional states. Rather than rushing to implement these technologies, we should examine our own motivations for using them. We must consider what we need in place to keep ourselves and others safe, and who we can trust to help us engage in ways that achieve the outcomes we seek while minimising and managing residual risks.
Specifically for emotion recognition AI, more research is required to understand its limitations and the metrics and guardrails needed. In the meantime, it is crucial to develop robust regulatory frameworks and public awareness to protect individuals from the potential misuse of these systems. Ultimately, the goal should not be to replace human judgment in interpreting emotions, but to find ways for AI to complement and enhance our understanding of human behaviour. As we navigate this complex landscape, ethical considerations must remain at the forefront, ensuring that technological advancement does not come at the cost of human dignity and privacy.
Anchoram Consulting submitted a response to the government’s ‘Introducing mandatory guardrails for AI in high-risk settings’ proposals paper, focussing on the ethical and safe application of AI.
Karen Geappen also prepared an independent submission response from the perspective that there is currently insufficient definition and guardrails for the protection of children/youth, especially when factoring in bias and the potential for personal data to be perpetually embedded into the AI models.