Computationally-Efficient Endpointing Features for Natural Spoken Interaction with Personal-Assistant Systems

Citation

Arsikere, H., Shriberg, E., & Ozertem, U. (2014, 4-9 May). Computationally-efficient endpointing features for natural spoken interaction with personal-assistant systems. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’14), Florence, Italy.

Abstract

Current speech-input systems typically use a nonspeech threshold for end-of-utterance detection. While usually sufficient for short utterances, this approach can cut speakers off during pauses in more complex utterances. We elicit personal-assistant speech (reminders, calendar entries, messaging, search) using a recognizer with a dramatically increased endpoint threshold, and find frequent nonfinal pauses. A standard endpointer with a 500 ms threshold (latency) yields a 36% cutoff rate on this corpus. Based on the new data, we develop low-cost acoustic features to discriminate nonfinal from final pauses. The features capture periodicity, speaking rate, spectral constancy, duration/intensity, and pitch of prepausal speech, using no speech recognition, speaker, or session information. Classification experiments achieve a 20% equal error rate (EER) at a 100 ms latency, thereby reducing both cutoffs and latency relative to the threshold-only baseline. Additional results on computational cost, feature importance, and speaker differences are discussed.
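The threshold-only baseline the abstract critiques can be sketched in a few lines: declare end-of-utterance once a run of consecutive nonspeech frames exceeds a fixed duration. This is a minimal illustrative sketch, not the paper's implementation; the function name, 10 ms frame size, and binary frame-level VAD input are assumptions for the example.

```python
def detect_endpoint(vad_frames, frame_ms=10, threshold_ms=500):
    """Threshold-only endpointer (illustrative sketch, not the paper's method).

    vad_frames: iterable of booleans, True = speech in that frame.
    Returns the index of the frame at which the endpointer fires,
    or None if no nonspeech run reaches the threshold.
    """
    needed = threshold_ms // frame_ms  # frames of silence required
    run = 0
    for i, is_speech in enumerate(vad_frames):
        run = 0 if is_speech else run + 1
        if run >= needed:
            return i
    return None
```

With a 500 ms threshold, a 400 ms nonfinal pause inside an utterance does not trigger the endpointer, but any final silence must last a full 500 ms before the system responds; this is the latency/cutoff trade-off the paper's pause-classification features aim to improve.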

Keywords: Speech, Speech recognition, Feature extraction, Standards, Market research, Databases, Modulation.
