Speech & natural language publications · December 1, 2017

Tackling Unseen Acoustic Conditions in Query-by-Example Search Using Time and Frequency Convolution for Multilingual Deep Bottleneck Features

Horacio Franco, Dimitra Vergyri

Citation

J. van Hout, V. Mitra, H. Franco, C. Bartels and D. Vergyri, “Tackling Unseen Acoustic Conditions in Query-by-Example Search Using Time and Frequency Convolution for Multilingual Deep Bottleneck Features,” in Proc. ASRU 2017, pp. 48-54, December 2017.

Abstract

Standard keyword spotting based on Automatic Speech Recognition (ASR) cannot be used on low- and no-resource languages due to the lack of annotated data and/or linguistic resources. In recent years, query-by-example (QbE) has emerged as an alternative way to enroll and find spoken queries in large audio corpora, yet mismatched and unseen acoustic conditions remain a difficult challenge given the lack of enrollment data. This paper revisits two neural network architectures developed for noise- and channel-robust ASR and applies them to building a state-of-the-art multilingual QbE system. By applying convolution in time or in frequency across the spectrum, these convolutional bottleneck networks learn more discriminative deep bottleneck features. In conjunction with dynamic time warping (DTW), these features enable robust QbE systems. We use the MediaEval 2014 QUESST data to evaluate robustness against language and channel mismatches, and we add several levels of artificial noise to the data to evaluate performance in degraded acoustic environments. We also assess performance on an Air Traffic Control QbE task with more realistic and higher levels of distortion in the push-to-talk domain.
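
To make the DTW search step concrete, the sketch below scores a spoken query against a longer utterance using subsequence dynamic time warping over frame-level bottleneck features. It is a minimal illustration, assuming NumPy and random arrays standing in for real bottleneck features extracted by a pretrained network; the cosine distance and step pattern are common choices for QbE-by-DTW, not necessarily the exact configuration used in the paper.

import numpy as np

def cosine_distance_matrix(q, u):
    # Pairwise cosine distance between query frames (q) and utterance frames (u),
    # each given as a (frames x feature_dim) array of bottleneck features.
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    un = u / (np.linalg.norm(u, axis=1, keepdims=True) + 1e-8)
    return 1.0 - qn @ un.T  # shape: (num_query_frames, num_utterance_frames)

def subsequence_dtw_score(q, u):
    # Best normalized cost of aligning the whole query to any span of the utterance.
    d = cosine_distance_matrix(q, u)
    nq, nu = d.shape
    acc = np.full((nq, nu), np.inf)
    acc[0, :] = d[0, :]                      # the match may start at any utterance frame
    for i in range(1, nq):
        acc[i, 0] = acc[i - 1, 0] + d[i, 0]
        for j in range(1, nu):
            acc[i, j] = d[i, j] + min(acc[i - 1, j],       # utterance frame matched to several query frames
                                      acc[i, j - 1],       # query frame matched to several utterance frames
                                      acc[i - 1, j - 1])   # one-to-one diagonal step
    # Lower is better; normalizing by query length keeps scores comparable across queries.
    return float(acc[-1].min() / nq)

# Example usage with random stand-ins for 60-dimensional bottleneck features.
rng = np.random.default_rng(0)
query = rng.standard_normal((40, 60))        # ~0.4 s query at 10 ms frames
utterance = rng.standard_normal((500, 60))   # ~5 s search utterance
print(subsequence_dtw_score(query, utterance))

In a full system, this score would be computed for each query against every utterance in the corpus and thresholded (or ranked) to produce detections; the robustness studied in the paper comes from the bottleneck features themselves, not from the DTW machinery.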
