Part two: can we trust emotional AI as it moves into production?


Part One of this article explored the concepts of Emotional AI and Affective Computing. The idea of a computer picking out the subtleties of human emotion and using them to build innovative products was placed on the table as food for thought.

In this second part of the discussion, we look at the more practical side of the discipline. Which areas are opening the door to Emotional AI? And what are the downsides, if any, in terms of privacy and trust?

Use cases in Emotional AI

Being able to interpret and simulate human emotions is a defining moment in AI development. If a computer can analyze an emotional or cognitive state, such as happiness, pain, or anxiety, it opens up use cases that were previously closed to computing. The first Emotional AI opportunities lie in the following areas:


Healthcare

Post-Traumatic Stress Disorder (PTSD) is a debilitating mental health disorder that severely impairs a sufferer's life. Researchers at the NYU School of Medicine and SRI International have developed a tool that diagnoses PTSD through voice analysis. The tool identifies the voice patterns of sufferers with an 89% success rate, can infer emotion, cognition, and mental health from a speaker's voice, and is expected to be available in clinics in the near future. The paper behind the tool, "Speech-based markers for posttraumatic stress disorder in US veterans," produced by NYU Langone and SRI International's STAR Lab and written by Dimitra Vergyri, Andreas Tsiartas, et al., describes the use of the statistical method of "random forests": a Machine Learning algorithm trained to classify speech markers that can accurately predict the disorder in a patient.
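To make the random-forest idea concrete, here is a minimal sketch of how voice features might be fed to such a classifier. It is not the NYU/SRI tool: the features and labels below are synthetic placeholders, and scikit-learn's RandomForestClassifier simply stands in for whatever implementation the researchers used.

```python
# Minimal sketch of a random-forest speech-marker classifier.
# This is NOT the NYU Langone / SRI STAR Lab tool; the features and labels
# are synthetic stand-ins for the kinds of voice markers such a system
# might use (e.g. jitter, shimmer, pause rate, pitch statistics).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder data: 200 speakers, 6 acoustic/prosodic features per speaker.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 1 = PTSD-positive label, 0 = control (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```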

Education


Children with autism typically struggle with non-verbal communication, and affective computing can be used to help children on the autism spectrum better understand the non-verbal elements of communication. The paper "CultureNet: A Deep Learning Approach for Engagement Intensity Estimation from Face Images of Children with Autism" documents how Deep Learning models can be used in conjunction with robots to improve "engagement estimation accuracy." The research was performed across two cultures, and importantly, this bi-cultural analysis showed that training based on one culture had limited cross-over to another; the paper concludes on the "importance of having access to data of the target culture and children when building deep models for engagement estimation."
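As a rough illustration of the kind of model involved, the sketch below maps a face crop to a continuous engagement score. It is a toy example, not the CultureNet architecture from the paper: the layer sizes, input resolution, and random input tensor are all placeholders.

```python
# Toy sketch of engagement-intensity regression from face images.
# This is NOT CultureNet; it only illustrates the general idea of a deep
# model mapping a face crop to a continuous engagement score.
import torch
import torch.nn as nn

class EngagementRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # continuous engagement intensity
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EngagementRegressor()
faces = torch.randn(4, 3, 64, 64)  # batch of 64x64 RGB face crops (random placeholders)
print(model(faces).shape)          # torch.Size([4, 1])
```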

Emotional AI and privacy?

In the U.S., Pew Research found that 81% of adults believe they have little control over the data companies collect about them. At the same time, the Internet Society found that 63% of people consider connected devices "creepy" and 75% distrust how their data is shared. Privacy is an essential design goal of any technology that potentially uses personal data, and that includes Emotional AI-based systems.

The personal data used in Emotional AI spans a wide range of types, including behavioral and biometric data. The design and development of Emotional AI devices will require large amounts of personal data, both for training and in use. These systems must be designed using the principles of Privacy by Design (PbD), as established by Ann Cavoukian, former Information and Privacy Commissioner of Ontario, alongside the principles of Design for Trust.

Design for Trust (DfT)

The methodology behind Design for Trust sets out principles that guide system designers and developers in building trust into a solution. The principles are based on user expectations of what trust is and how it is represented in the world of technology. DfT places focus on transparency and an understanding of the mission behind the use of AI in a device.

In terms of training Emotional AI algorithms, the key takeaway is that developers need to remember that the quality of data matters more than its quantity. As to who is responsible for trustworthiness as Emotional AI progresses, a panel at AI World 2019, led by Karen Myers of SRI International's Artificial Intelligence Center, concluded that "benchmarks of trustworthiness" are needed to inform the consumer.

No matter what type or amount of personal data is being processed, privacy and trust should be part of the design remit, and all information should be collected from users in a privacy-respectful manner.

Privacy-preserving machine learning is a key element of AI and can resolve many of the privacy issues that arise in systems based on Emotional AI. It is a rapidly evolving field that is beyond the scope of this article; however, a comprehensive overview can be found in the paper "Privacy preservation techniques in big data analytics: a survey."
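To give a flavor of what such techniques look like in practice, the sketch below releases an aggregate statistic with the Laplace mechanism from differential privacy. It is only one small example from the family of methods surveyed in the cited paper, and the per-user "stress scores" it protects are synthetic.

```python
# Releasing an aggregate statistic with the Laplace mechanism
# (epsilon-differential privacy). One small illustration of
# privacy-preserving analytics, not a complete privacy solution
# for an Emotional AI system.
import numpy as np

rng = np.random.default_rng(42)

def dp_mean(values, lower, upper, epsilon):
    """Return a differentially private mean of values bounded in [lower, upper]."""
    values = np.clip(values, lower, upper)
    true_mean = values.mean()
    # Sensitivity of the mean of n bounded values is (upper - lower) / n.
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Hypothetical per-user "stress scores" in [0, 1].
scores = rng.uniform(0.0, 1.0, size=1000)
print("true mean:", scores.mean())
print("DP mean (eps=0.5):", dp_mean(scores, 0.0, 1.0, epsilon=0.5))
```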

Playing with emotions

Emotional AI and Affective Computing open a new era in the human-computer interface, one that will take user interactions with computers to new heights of achievement.

This emerging branch of computing is unique because of its potential for a positive influence on society. But these fields are evolving at a rapid pace, which makes it impractical for developers and businesses to rely on regulations to guide their use of these technologies.

Businesses and developers must ensure that they are engineering solutions with the user's well-being as a top priority. Failing to do so leads to user distrust and, ultimately, low adoption of the offering.

If we design Emotional AI with this in mind, there is massive potential to upgrade computing and take us into a brave new world of emotion.

