Feature Fusion for High-Accuracy Keyword Spotting


Citation

Mitra, V., Van Hout, J., Franco, H., Vergyri, D., Lei, Y., Graciarena, M., … & Zheng, J. (2014, May). Feature fusion for high-accuracy keyword spotting. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 7143-7147). IEEE.

Abstract

This paper assesses the role of robust acoustic features in spoken term detection (also known as keyword spotting, or KWS) under heavily degraded channel and noise-corrupted conditions. A number of noise-robust acoustic features were used, both in isolation and in combination, to train large-vocabulary continuous speech recognition (LVCSR) systems, and the resulting word lattices were used for spoken term detection. Results indicate that the use of robust acoustic features improved KWS performance with respect to a highly optimized, state-of-the-art baseline system. It has been shown that fusion of multiple systems improves KWS performance; however, the number of systems that can be trained is constrained by the number of frontend features. This work shows that, given a set of frontend features, it is possible to train several systems by using the frontend features both on their own and through different feature fusion techniques, which yields a richer set of individual systems. Results show that KWS performance improves over individual feature-based systems when multiple features are fused with one another, and improves further when multiple such systems are combined. Finally, this work shows that fusion of fused and single-feature-based systems provides a significant improvement in KWS performance compared to fusion of single-feature-based systems alone.
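As an illustration of the general idea only (not the paper's specific frontends or fusion recipe), a minimal sketch of frame-level feature fusion by concatenation might look like the following; the feature names, dimensions, and the `fuse_features` helper are placeholders assumed for this example.

```python
import numpy as np

def fuse_features(feature_streams):
    """Frame-level feature fusion by concatenation (illustrative sketch).

    Each element of `feature_streams` is assumed to be a
    (num_frames, feature_dim) array from one acoustic frontend
    (e.g., a cepstral or noise-robust feature stream). Streams are
    truncated to the shortest one so frames stay aligned, then
    concatenated along the feature axis, giving one fused feature
    vector per frame that could be used to train an LVCSR system.
    """
    min_frames = min(s.shape[0] for s in feature_streams)
    aligned = [s[:min_frames] for s in feature_streams]
    return np.concatenate(aligned, axis=1)

# Example with two hypothetical frontends for a 300-frame utterance.
mfcc_like = np.random.randn(300, 13)        # placeholder cepstral stream
robust_like = np.random.randn(300, 40)      # placeholder noise-robust stream
fused = fuse_features([mfcc_like, robust_like])  # shape (300, 53)
```

Under this sketch, each distinct subset of frontend features yields a different fused stream, which is one way to obtain multiple trainable systems from a fixed set of frontends before combining their KWS outputs.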
