Tackling Unseen Acoustic Conditions in Query-by-Example Search Using Time and Frequency Convolution for Multilingual Deep Bottleneck Features


Citation

J. van Hout, V. Mitra, H. Franco, C. Bartels and D. Vergyri, “Tackling Unseen Acoustic Conditions in Query-by-Example Search Using Time and Frequency Convolution for Multilingual Deep Bottleneck Features,” in Proc. ASRU 2017, pp. 48-54, December 2017.

Abstract

Standard keyword spotting based on Automatic Speech Recognition (ASR) cannot be used on low- and no-resource languages due to a lack of annotated data and/or linguistic resources. In recent years, query-by-example (QbE) has emerged as an alternative way to enroll and find spoken queries in large audio corpora, yet mismatched and unseen acoustic conditions remain a difficult challenge given the lack of enrollment data. This paper revisits two neural network architectures developed for noise- and channel-robust ASR, and applies them to building a state-of-the-art multilingual QbE system. By applying convolution in time or frequency across the spectrum, these convolutional bottleneck networks learn more discriminative deep bottleneck features. In conjunction with dynamic time warping (DTW), these features enable robust QbE systems. We use the MediaEval 2014 QUESST data to evaluate robustness against language and channel mismatches, and add several levels of artificial noise to the data to evaluate performance in degraded acoustic environments. We also assess performance on an Air Traffic Control QbE task with more realistic and higher levels of distortion in the push-to-talk domain.
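The following Python sketch illustrates, under simplifying assumptions, the DTW-based matching stage the abstract refers to: bottleneck features are assumed to have already been extracted by some network (not shown), and all function and variable names are illustrative rather than taken from the paper.

```python
# Minimal sketch of subsequence-DTW matching for query-by-example search,
# assuming frame-level bottleneck features are precomputed. Names are
# illustrative; the feature extractor and scoring details in the paper differ.
import numpy as np

def cosine_distance_matrix(query, search):
    """Pairwise cosine distances between query frames (Nq x D) and
    search-utterance frames (Ns x D)."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    s = search / np.linalg.norm(search, axis=1, keepdims=True)
    return 1.0 - q @ s.T  # shape (Nq, Ns)

def subsequence_dtw_score(query, search):
    """Subsequence DTW: the query may start and end anywhere in the search
    utterance. Lower accumulated cost means a better detection."""
    dist = cosine_distance_matrix(query, search)
    nq, ns = dist.shape
    acc = np.full((nq, ns), np.inf)
    acc[0, :] = dist[0, :]                     # free start along the search axis
    for i in range(1, nq):
        acc[i, 0] = acc[i - 1, 0] + dist[i, 0]
        for j in range(1, ns):
            acc[i, j] = dist[i, j] + min(acc[i - 1, j],      # query frame stretches
                                         acc[i, j - 1],      # search frame stretches
                                         acc[i - 1, j - 1])  # diagonal step
    best_cost = acc[-1, :].min()               # free end along the search axis
    return best_cost / nq                      # normalize by query length

# Example usage with random stand-ins for bottleneck features:
rng = np.random.default_rng(0)
query_feats = rng.standard_normal((40, 80))    # e.g. 40 query frames, 80-dim features
search_feats = rng.standard_normal((500, 80))  # e.g. a 500-frame search utterance
print(subsequence_dtw_score(query_feats, search_feats))
```

In practice, the more discriminative the bottleneck features, the sharper the cost valleys along the DTW path, which is why the convolutional front-ends described above improve detection in mismatched acoustic conditions.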
