Chloe Duckworth: Using voice tech to decipher the emotion behind the words


The CEO and co-founder of Valence Vibrations is leveraging technology from SRI to help people better read the room.


Chloe Duckworth is CEO of Valence Vibrations, a technology company that builds artificial intelligence (AI) models that analyze tonal data to identify and classify emotions. Its app, powered by comprehensive voice data from SRI’s Speech Technology and Research (STAR) Laboratory, helps users better interpret the emotions behind their conversations.

Here, Duckworth discusses her journey to create her startup and how she and her team hope to help us better understand the emotions behind our conversations:

When I was an undergraduate computational neuroscience student at the University of Southern California, my dorm neighbor and AI researcher Shannon Brownlee and I decided to enter a hackathon held by Neosensory, a haptics company working to help deaf people learn to interpret sound by feeling vibration patterns.

During the hackathon, Shannon and I built an emotional subtitle app with haptic feedback that could help neurodivergent people and the people they interact with better communicate with each other. We started the company while I was a sophomore at USC. I graduated early, worked for a time at a neurotech company, which catalyzed my interest in entrepreneurship and in the neurotechnology industry, and soon began working full-time on our startup, with a mission to become the emotional subtitle of the internet.

Three years later, as the CEO and co-founder of Valence Vibrations, I’m working with SRI to develop products that use AI to deliver emotional feedback during real-time conversations. I learned about SRI through another of its portfolio companies, Encounter AI, whose CEO, Derrick Johnson, worked with SRI’s STAR Lab to develop a conversational AI voice system to improve remote ordering at restaurants.

We started our company by crowdsourcing data through audio surveys of a representative sample of North American English speakers; now we also work with the STAR Lab to incorporate datasets that give us access to the full vocal range of North American English speakers in the U.S. and Canada. SRI is augmenting our data to make it more diverse, which has expanded and enriched our existing models, and we’re excited to leverage their expertise to make our technology as inclusive as possible.

Our first product, an app called Vibes, works by analyzing vocal tones, assigning them an emotional classification, and then relaying those emotions to the user as colors and vibration patterns on an Apple Watch. The app can identify seven emotions, including sadness, anger, and happiness. We just launched an enterprise API version that brings this technology into corporate settings, helping sales teams, customer support staff, and virtual teams better interpret how their teammates, customers, and research subjects are feeling during digital conversations.
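To make that pipeline concrete, here is a minimal Python sketch of how such a system might fit together, assuming librosa for tonal feature extraction and a generic scikit-learn classifier. The feature set, the four emotion classes beyond the three named above, and the color and vibration mappings are all illustrative assumptions, not Valence’s actual models.

```python
# A minimal sketch of a tone-to-haptics pipeline like the one described
# above: extract tonal features from an audio clip, classify the emotion,
# and map it to a color plus a vibration pattern. Everything here is an
# illustrative assumption, not Valence Vibrations' actual implementation.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

# The article names sadness, anger, and happiness; the other four classes
# are assumed for illustration.
EMOTIONS = ["sadness", "anger", "happiness", "fear",
            "surprise", "calm", "neutral"]

# Hypothetical feedback mappings: a display color and a haptic pattern
# (pulse durations in milliseconds) per emotion.
FEEDBACK = {
    "sadness":   ("blue",   [400, 200, 400]),
    "anger":     ("red",    [100, 50, 100, 50, 100]),
    "happiness": ("yellow", [150, 150, 150]),
    "fear":      ("purple", [80, 80, 80, 80]),
    "surprise":  ("orange", [60, 300]),
    "calm":      ("green",  [500]),
    "neutral":   ("gray",   [250]),
}

def tonal_features(path: str) -> np.ndarray:
    """Summarize a clip's vocal tone as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre
    pitch = librosa.yin(y, fmin=65, fmax=400, sr=sr)     # intonation contour
    rms = librosa.feature.rms(y=y)                       # loudness
    return np.concatenate([mfcc.mean(axis=1),
                           [np.nanmean(pitch), np.nanstd(pitch), rms.mean()]])

def classify(path: str, model) -> tuple[str, str, list[int]]:
    """Predict an emotion for a clip and look up its feedback cues."""
    emotion = model.predict(tonal_features(path).reshape(1, -1))[0]
    color, pattern = FEEDBACK[emotion]
    return emotion, color, pattern

if __name__ == "__main__":
    # Placeholder fit on random vectors so the sketch runs end to end;
    # a real system would train on labeled voice recordings.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(70, 16)), np.repeat(EMOTIONS, 10)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    # emotion, color, pattern = classify("clip.wav", model)
```

In a real deployment, the predicted pattern would drive the watch’s haptic engine and the color would tint the on-screen subtitle; this sketch stops at producing those cues.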

Developing this company has been an exciting journey. There’s a problem in the field of neurodiversity right now: a lot of the therapies and technologies being designed to improve communication for neurodivergent people are aimed at changing their unique and fundamental communication styles in ways that are unnatural for them and can be detrimental to their mental health. The goal of our products is to allow people to show up as they are, and to be able to understand and respect each other without the need for anyone to adapt to normative standards of communication.

Our team began our work focusing primarily on the needs of the autistic and attention-deficit/hyperactivity disorder (ADHD) communities, but now, working with SRI, we can’t wait to optimize our models and improve communication across many demographics. Our eventual goal is to see emotional subtitles integrated across the internet: on video conferencing platforms, video streaming platforms, and social media.

One of our company’s lightbulb moments was the realization that the issues we’re dealing with are not deficits that exist within only a few people. Rather, emotional perception is a communication challenge that affects everyone, across cultures, languages, and neurotypes, because communication must be mutually intelligible. Sometimes that requires accommodation, but often it’s the most underrepresented person in the room who bears the greatest burden of changing themselves. We need to make sure we’re leveling the playing field from all sides.
