Improving robustness against reverberation for automatic speech recognition


Citation

V. Mitra et al., “Improving robustness against reverberation for automatic speech recognition,” in Proc. 2015 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2015), p. 525.

Abstract

Reverberation is a phenomenon observed in almost all enclosed environments. Human listeners rarely have trouble comprehending speech in reverberant environments, but automatic speech recognition (ASR) systems often suffer increased error rates under such conditions. In this work, we explore the role of robust acoustic features, motivated by human speech perception studies, in building ASR systems that are robust to reverberation effects. Using the dataset distributed for the “Automatic Speech Recognition In Reverberant Environments” (ASpIRE-2015) challenge organized by IARPA, we explore Gaussian mixture models (GMMs), deep neural networks (DNNs), and convolutional deep neural networks (CDNNs) as candidate acoustic models for recognizing continuous speech in reverberant environments. We demonstrate that DNN-based systems trained with robust features offer significant reductions in word error rate (WER) compared to systems trained with baseline mel-filterbank features. We present a novel time-frequency convolutional neural network (TFCNN) framework that performs convolution over the feature space along both the time and frequency axes, and we found it to consistently outperform the CDNN systems for all feature sets across all testing conditions. Finally, we show that further WER reduction is achievable through system fusion of n-best lists from multiple systems.
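To make the TFCNN idea concrete, the sketch below shows one plausible reading of an acoustic model with two parallel convolutional branches, one convolving only along the time axis of the input feature patch and one only along the frequency axis, whose pooled outputs are concatenated before the fully connected layers. This is a hypothetical PyTorch illustration, not the paper's implementation: the layer sizes, kernel widths, pooling, and context window (11 frames of 40 filterbank bands) are all assumed for the example.

```python
# Hypothetical TFCNN sketch (assumed PyTorch; sizes are illustrative,
# not the configuration reported in the paper).
import torch
import torch.nn as nn

class TFCNN(nn.Module):
    def __init__(self, n_frames=11, n_bands=40, n_classes=3000):
        super().__init__()
        # Branch 1: convolution along the time axis only (kernel spans frames).
        self.time_conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(5, 1), padding=(2, 0)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
        )
        # Branch 2: convolution along the frequency axis only (kernel spans bands).
        self.freq_conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(1, 5), padding=(0, 2)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
        )
        # Flattened sizes after each branch's pooling.
        t_out = 32 * (n_frames // 2) * n_bands
        f_out = 32 * n_frames * (n_bands // 2)
        # Fully connected layers consume the concatenated branch outputs.
        self.classifier = nn.Sequential(
            nn.Linear(t_out + f_out, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, n_frames, n_bands) time-frequency feature patch.
        t = self.time_conv(x).flatten(1)
        f = self.freq_conv(x).flatten(1)
        return self.classifier(torch.cat([t, f], dim=1))

# Usage: score a batch of 11-frame, 40-band feature patches.
logits = TFCNN()(torch.randn(8, 1, 11, 40))
```

The design point the abstract emphasizes is the separation of the two convolution directions: a conventional CDNN convolves a single 2-D kernel over the patch, whereas the TFCNN lets the time and frequency branches learn their structure independently before fusion.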

Index Terms— time-frequency convolution nets, deep convolution networks, robust feature combination, robust speech recognition, reverberation robustness, system fusion.
