The Robustness of an Almost-Parsing Language Model Given Errorful Training Data

Citation

Wang, W., Harper, M. P., & Stolcke, A. (2003, April). The robustness of an almost-parsing language model given errorful training data. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), Vol. 1. IEEE.

Abstract

An almost-parsing language model has been developed that provides a framework for tightly integrating multiple knowledge sources. Lexical features and syntactic constraints are integrated into a uniform linguistic structure (called a SuperARV) that is associated with words in the lexicon. The SuperARV language model has been shown to reduce perplexity and word error rate (WER) compared to trigram, part-of-speech-based, and parser-based language models on the DARPA Wall Street Journal (WSJ) CSR task. In this paper we further investigate the robustness of the language model to possibly inconsistent and errorful training data, as well as its ability to scale up to more challenging LVCSR tasks, by comparing its performance on the DARPA WSJ and Hub4 (Broadcast News) CSR tasks.
