Extracting Spoken and Acoustic Concepts for Multimedia Event Detection

Citation

van Hout, J., Akbacak, M., Castan, D., Yeh, E., & Sanchez, M. (2013, 26-31 May). Extracting spoken and acoustic concepts for multimedia event detection. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’13), Vancouver, Canada.

Abstract

The popularity of online videos has driven strong interest in recent years in audio processing for improving online video search. In this paper, we explore using acoustic concepts and spoken concepts, extracted via audio segmentation/recognition and speech recognition respectively, for Multimedia Event Detection (MED). To extract spoken concepts, a segmenter trained on annotated data from user videos segments the audio into three classes: speech, music, and other sounds. The speech segments are passed to an Automatic Speech Recognition (ASR) engine, and words from the 1-best ASR output, as well as posterior-weighted word counts collected from ASR lattices, are used as features for an SVM-based classifier. Acoustic concepts are extracted using the 3-gram lattice counts of two Acoustic Concept Recognition (ACR) systems trained on 7 broad classes. MED results are reported on a subset of the NIST 2011 TRECVID data. We find that spoken concepts using lattices yield a 15% relative improvement in Average Pmiss (APM) over 1-best-based features. Further, the proposed spoken concepts give a 30% relative gain in APM over the ACR-based MED system using 7 classes. Lastly, we obtain an 8% relative APM improvement after score-level fusion of both concept types, showing that the two approaches couple effectively.
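To make the spoken-concept pipeline concrete, the sketch below shows one plausible way to turn posterior-weighted word counts from ASR lattices into sparse features for an SVM-based event detector and then fuse its score with an ACR-based score. This is not the authors' implementation: the word counts, class labels, fusion weight, and scoring rule are illustrative assumptions, and the code uses scikit-learn only as a stand-in classifier.

# Hedged sketch (not the paper's code): posterior-weighted bag-of-words features
# from lattice word counts -> linear SVM -> score-level fusion with an ACR score.
# All feature values, labels, and the fusion weight below are made-up examples.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Each video is represented by expected (posterior-weighted) word counts,
# i.e., counts summed over lattice paths weighted by their posteriors.
train_counts = [
    {"birthday": 2.7, "cake": 1.3, "music": 0.4},   # event-positive video
    {"engine": 1.9, "repair": 0.8, "tools": 1.1},   # event-negative video
]
train_labels = [1, 0]

vectorizer = DictVectorizer(sparse=True)
X_train = vectorizer.fit_transform(train_counts)

# Linear SVM over the sparse spoken-concept features.
clf = LinearSVC(C=1.0)
clf.fit(X_train, train_labels)

# Score a new video from its posterior-weighted counts.
test_counts = [{"birthday": 1.1, "party": 0.6}]
spoken_score = clf.decision_function(vectorizer.transform(test_counts))[0]

# Score-level fusion: combine the spoken-concept score with an ACR-based score
# (assumed to come from a separate detector) via a weighted sum; the weight
# alpha is an assumption, not a value from the paper.
acoustic_score = 0.35
alpha = 0.6
fused_score = alpha * spoken_score + (1 - alpha) * acoustic_score
print(f"fused detection score: {fused_score:.3f}")

In the paper the fusion is described only as score-level; a weighted sum is one common choice, and in practice the weight would be tuned on held-out data rather than fixed as above.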
