Study of senone-based deep neural network approaches for spoken language recognition

Citation

L. Ferrer, Y. Lei, M. McLaren and N. Scheffer, "Study of senone-based deep neural network approaches for spoken language recognition," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 1, pp. 105-116, Jan. 2016.

Abstract

This paper compares different approaches for using deep neural networks (DNNs) trained to predict senone posteriors for the task of spoken language recognition (SLR). These approaches have recently been found to outperform various baseline systems on different datasets, but they have not yet been compared to each other or to a common baseline. Two of these approaches use the DNNs to generate feature vectors, which are then processed in different ways to predict the score of each language given a test sample. The features are extracted either from a bottleneck layer in the DNN or from the output layer. In the third approach, the standard i-vector extraction procedure is modified to use the senones as classes and the DNN to predict the zeroth-order statistics. We compare these three approaches and conclude that the approach based on bottleneck features followed by i-vector modeling outperforms the other two. We also show that score-level fusion of some of these approaches leads to gains over using a single approach for short-duration test samples. Finally, we demonstrate that fusing systems that use DNNs trained with several languages leads to improvements in performance over the best single system, and we propose an adaptation procedure for DNNs trained with languages for which less data is available. Overall, we show improvements of between 40% and 70% relative to a state-of-the-art Gaussian mixture model (GMM) i-vector system on test durations from 3 seconds to 120 seconds, on two significantly different tasks: the NIST 2009 language recognition evaluation task and the DARPA RATS language identification task.
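
The third approach can be illustrated with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it shows how per-frame senone posteriors from a DNN can replace GMM component responsibilities when accumulating the zeroth- and first-order Baum-Welch statistics used for i-vector extraction. The posteriors in the example are random stand-ins for the output of a forward pass through a trained senone classifier.

    import numpy as np

    def baum_welch_stats(features, posteriors):
        """Accumulate Baum-Welch statistics from senone posteriors.

        features:   (T, D) array of acoustic feature frames.
        posteriors: (T, S) array of per-frame senone posteriors from
                    the DNN, one column per senone. In a GMM i-vector
                    system these would be Gaussian component
                    responsibilities; here each senone plays the role
                    of one mixture component.
        """
        # Zeroth-order statistics: expected frame count per senone, shape (S,).
        N = posteriors.sum(axis=0)
        # First-order statistics: posterior-weighted feature sums, shape (S, D).
        F = posteriors.T @ features
        return N, F

    # Example with random stand-ins for a 3-second utterance
    # (300 frames, 40-dim features, 2000 senones).
    T, D, S = 300, 40, 2000
    features = np.random.randn(T, D)
    logits = np.random.randn(T, S)
    # Row-wise softmax, standing in for the DNN's output layer.
    posteriors = np.exp(logits - logits.max(axis=1, keepdims=True))
    posteriors /= posteriors.sum(axis=1, keepdims=True)
    N, F = baum_welch_stats(features, posteriors)

The statistics N and F would then feed a standard i-vector extractor; only the source of the posteriors changes relative to the GMM baseline.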

Index Terms—Spoken Language Recognition, Deep Neural Networks, Senones

