The Use of a Linguistically Motivated Language Model in Conversational Speech Recognition

Citation

Wang, W., Stolcke, A., & Harper, M. P. (2004, May). The use of a linguistically motivated language model in conversational speech recognition. In 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 1, pp. I-261). IEEE.

Abstract

Structured language models have recently been shown to give significant improvements in large-vocabulary recognition relative to traditional word N-gram models, but typically imply a heavy computational burden and have not been applied to large training sets or complex recognition systems. In previous work, we developed a linguistically motivated and computationally efficient almost-parsing language model using a data structure derived from Constraint Dependency Grammar parses that tightly integrates knowledge of words, lexical features, and syntactic constraints. In this paper we show that such a model can be used effectively and efficiently in all stages of a complex, multi-pass conversational telephone speech recognition system. Compared to a state-of-the-art 4-gram interpolated word- and class-based language model, we obtained a 6.2% relative word error reduction (a 1.6% absolute reduction) on a recent NIST evaluation set.
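As a note on the reported figures: a 1.6% absolute reduction that corresponds to a 6.2% relative reduction implies a baseline word error rate of roughly 25.8% (1.6 / 0.062); the baseline itself is not stated in the abstract, so the value below is derived, not quoted. A minimal sketch of that arithmetic:

```python
# Relation between absolute and relative word error rate (WER) reduction.
# The 1.6-point absolute and 6.2% relative figures come from the abstract;
# the implied baseline WER is derived from them, not stated in the paper.
absolute_reduction = 1.6    # percentage points
relative_reduction = 0.062  # 6.2% relative

baseline_wer = absolute_reduction / relative_reduction  # implied baseline, in %
improved_wer = baseline_wer - absolute_reduction

print(f"implied baseline WER: {baseline_wer:.1f}%")  # ~25.8%
print(f"implied improved WER: {improved_wer:.1f}%")  # ~24.2%
```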
