Time-frequency convolutional networks for robust speech recognition

Citation

V. Mitra and H. Franco, “Time-frequency convolutional networks for robust speech recognition,” in Proc. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2015, p. 317.

Abstract

Convolutional deep neural networks (CDNNs) have consistently shown more robustness to noise and background contamination than traditional deep neural networks (DNNs). For speech recognition, CDNNs apply their convolution filters across frequency, which helps to remove cross-spectral distortions and, to some extent, speaker-level variability stemming from vocal tract length differences. Convolution across time, by contrast, has received comparatively little attention within the speech technology community. This work presents a modified CDNN architecture that we call the time-frequency convolutional network (TFCNN), in which two parallel convolutional layers operate on the input feature space, one across time and one across frequency, each followed by its own pooling layer. The feature maps obtained from the two convolutional layers are then combined and fed to a fully connected DNN. Our experimental analysis on noise-, channel-, and reverberation-corrupted databases shows that TFCNNs achieve lower speech recognition error rates than CDNNs, whether the input is baseline mel-filterbank features or noise-robust acoustic features.
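
Illustrative sketch

The architecture described in the abstract lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch rendering of the TFCNN idea, assuming log-mel input patches; the filter counts, kernel sizes, pooling widths, and layer sizes are placeholder assumptions for illustration, not the hyperparameters used in the paper.

import torch
import torch.nn as nn

class TFCNN(nn.Module):
    """Two parallel convolutions over a time-frequency input patch:
    one across frequency, one across time, each with its own pooling,
    followed by a fully connected DNN on the combined feature maps.
    All sizes below are illustrative assumptions, not the paper's."""

    def __init__(self, n_freq=40, n_time=11, n_out=2000):
        super().__init__()
        # Convolution across frequency: the kernel spans a band of
        # filterbank channels at each frame; pooling is across frequency.
        self.freq_conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(8, 1)),   # (freq, time)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(3, 1)),
        )
        # Convolution across time: the kernel spans several frames at
        # each frequency channel; pooling is across time.
        self.time_conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(1, 4)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
        )
        # Infer the flattened feature size for the fully connected stage.
        with torch.no_grad():
            dummy = torch.zeros(1, 1, n_freq, n_time)
            n_feat = (self.freq_conv(dummy).flatten(1).shape[1]
                      + self.time_conv(dummy).flatten(1).shape[1])
        # Fully connected DNN on the combined feature maps.
        self.dnn = nn.Sequential(
            nn.Linear(n_feat, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_out),   # e.g., senone posteriors
        )

    def forward(self, x):
        # x: (batch, 1, n_freq, n_time) time-frequency input features
        f = self.freq_conv(x).flatten(1)
        t = self.time_conv(x).flatten(1)
        return self.dnn(torch.cat([f, t], dim=1))

# Usage with a batch of 40-band, 11-frame log-mel patches:
net = TFCNN()
x = torch.randn(32, 1, 40, 11)
y = net(x)   # (32, n_out) acoustic-model scores

As in the abstract, the two branches pool along different axes, frequency for the frequency convolution and time for the time convolution, before their feature maps are concatenated and passed to the fully connected stage.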

