This study demonstrates that a speech-based algorithm can objectively differentiate PTSD cases from controls.
To address the challenge of mapping characteristics of individuals’ speech to information about the group, we coded behavioral and learning-related indicators of collaboration at the individual level.
We describe the methodology for the collection and annotation of a large corpus of emotional speech data through crowdsourcing.
Speech conveys many things beyond lexical content, including aspects of stance and attitude that have received little study.
We predict heart rate (HR) from speech using the SRI BioFrustration Corpus. In contrast to previous studies, we use continuous spontaneous speech as input.
This work investigates whether nonlexical information from speech can automatically predict the quality of small-group collaborations. Audio was collected from students as they collaborated in groups of three to solve math problems.
This study investigates whether automatic audio-based monitoring of interactions can predict collaboration quality.
This study compares the effects of noise and reverberation on depression prediction using standard mel-frequency cepstral coefficients (MFCCs) and damped oscillator cepstral coefficients, features designed for noise robustness.
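As a point of reference for the MFCC features mentioned above, the sketch below shows a minimal MFCC extraction pipeline (framing, windowing, power spectrum, mel filterbank, log compression, DCT). This is a generic textbook implementation, not the pipeline used in the study; the frame length, hop size, and filter counts are illustrative defaults, and the 440 Hz test tone is a made-up input.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # Frame the signal, apply a Hamming window, take the power spectrum,
    # pass it through the mel filterbank, log-compress, and decorrelate
    # with a type-II DCT, keeping the first n_ceps coefficients.
    n_frames = 1 + (len(signal) - frame_len) // hop
    win = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * win
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    fbank = np.maximum(power @ mel_filterbank(n_filters, n_fft, sr).T, 1e-10)
    return dct(np.log(fbank), type=2, axis=1, norm="ortho")[:, :n_ceps]

# Illustrative input: one second of a 440 Hz tone at 16 kHz.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (98, 13): 98 frames, 13 cepstral coefficients each
```

Damped oscillator cepstral coefficients replace the power-spectrum stage with the response of a bank of damped oscillators, which is what gives them their robustness to additive noise.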
Prediction of heart rate changes from speech features during interaction with a misbehaving dialog system
This study examines two questions: how do undesirable system responses affect people physiologically, and to what extent can we predict physiological changes from the speech signal alone?