Speech & natural language publications | October 1, 2015 | Article

Study of senone-based deep neural network approaches for spoken language recognition

SRI author: Mitchell McLaren

Citation



L. Ferrer, Y. Lei, M. McLaren and N. Scheffer, “Study of senone-based deep neural network approaches for spoken language recognition,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. PP, no. 99, pp. 1-1.

Abstract

This paper compares different approaches for using deep neural networks (DNNs) trained to predict senone posteriors for the task of spoken language recognition (SLR). These approaches have recently been found to outperform various baseline systems on different datasets, but they have not yet been compared to each other or to a common baseline. Two of these approaches use the DNNs to generate feature vectors that are then processed in different ways to predict the score of each language given a test sample. The features are extracted either from a bottleneck layer in the DNN or from the output layer. In the third approach, the standard i-vector extraction procedure is modified to use the senones as classes and the DNN to predict the zero-th order statistics. We compare these three approaches and conclude that the approach based on bottleneck features followed by i-vector modeling outperforms the other two approaches. We also show that score-level fusion of some of these approaches leads to gains over using a single approach for short-duration test samples. Finally, we demonstrate that fusing systems that use DNNs trained with several languages leads to improvements in performance over the best single system, and we propose an adaptation procedure for DNNs trained with languages with less available data. Overall, we show improvements between 40% and 70% relative to a state-of-the-art Gaussian mixture model (GMM) i-vector system on test durations from 3 seconds to 120 seconds on two significantly different tasks: the NIST 2009 language recognition evaluation task and the DARPA RATS language identification task.
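
To make the DNN-derived quantities in the abstract concrete, the following is a minimal sketch (not the authors' code) of the two building blocks the compared approaches rely on: bottleneck features taken from a narrow hidden layer of a senone-classification DNN, and per-frame senone posteriors from the output layer accumulated as zero-th order statistics for the modified i-vector extraction. The network here uses random weights and illustrative layer sizes (40-dimensional frames, a 60-dimensional bottleneck, 500 senones); these are assumptions for illustration, not the configuration used in the paper.

import numpy as np

# Stand-in for a trained senone-classification DNN (random weights here):
# input frames -> wide hidden layer -> narrow bottleneck layer -> senone softmax.
rng = np.random.default_rng(0)
n_frames, feat_dim, hid_dim, bn_dim, n_senones = 300, 40, 1024, 60, 500
W1 = rng.standard_normal((feat_dim, hid_dim)) * 0.01
W2 = rng.standard_normal((hid_dim, bn_dim)) * 0.01
W3 = rng.standard_normal((bn_dim, n_senones)) * 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

frames = rng.standard_normal((n_frames, feat_dim))  # acoustic features of one utterance

hidden = sigmoid(frames @ W1)
bottleneck = hidden @ W2               # (a) per-frame bottleneck features,
                                       #     later modeled with an i-vector back end
posteriors = softmax(bottleneck @ W3)  # per-frame senone posteriors from the output layer

# (b) zero-th order statistics: per-senone occupancies summed over frames,
#     used in place of GMM component posteriors in the modified i-vector extraction.
zeroth_order_stats = posteriors.sum(axis=0)  # shape: (n_senones,)

print(bottleneck.shape, zeroth_order_stats.shape)

In the paper's best-performing configuration, the bottleneck features in (a) feed a conventional i-vector system; the sketch only indicates where those features and statistics come from, not the back-end modeling or the score-level fusion.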
