Crowdsourcing Emotional Speech | SRI International

Crowdsourcing Emotional Speech

April 2018

Published in:
2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Citation

J. Smith, A. Tsiartas, V. Wagner, E. Shriberg and N. Bassiou, "Crowdsourcing emotional speech," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 5139-5143.

Abstract 

We describe the methodology for the collection and annotation of a large corpus of emotional speech data through crowdsourcing. The corpus offers 187 hours of data from 2,965 subjects. The data include non-emotional recordings from each subject as well as recordings for five emotions: angry, happy-low-arousal, happy-high-arousal, neutral, and sad. The data consist of spontaneous speech elicited from subjects via a web-based tool. Subjects used their own personal recording equipment, resulting in a data set that contains variation in room acoustics, microphones, and so on. This offers the advantage of matching the variation one would expect when deploying speech technology in the wild in a web-based environment. The annotation scheme covers the quality of emotion expressed through both tone of voice and what was said, along with common audio-quality issues. We discuss lessons learned in the process of creating this corpus.

Conference Paper