Comparing HMM, Maximum Entropy, and Conditional Random Fields for Disfluency Detection

Citation

Liu, Y., Shriberg, E., Stolcke, A., & Harper, M. P. (2005, September). Comparing HMM, maximum entropy, and conditional random fields for disfluency detection. In Interspeech (pp. 3313-3316).

Abstract

Automatic detection of disfluencies in spoken language is important for making speech recognition output more readable and for aiding downstream language processing modules. We compare a generative hidden Markov model (HMM)-based approach with two conditional models, a maximum entropy (Maxent) model and a conditional random field (CRF), for detecting disfluencies in speech. The conditional modeling approaches provide a more principled way to model correlated features. In particular, the CRF approach directly detects the reparandum regions and thus avoids the use of ad hoc heuristic rules. We evaluate the performance of these three models across two different corpora (conversational speech and broadcast news) and two types of transcriptions (human transcriptions and recognition output). Overall, we find that the conditional modeling approaches (Maxent and CRF) provide a benefit over the HMM approach. Effects of speaking style, word recognition errors, and future directions are also discussed.
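
The CRF approach described in the abstract casts reparandum detection as word-level sequence labeling. The sketch below illustrates that idea only; it assumes a simple BIO-style label scheme and the third-party sklearn-crfsuite package, neither of which is the toolkit or feature set used in the paper.

```python
# Minimal sketch: reparandum detection as BIO sequence labeling with a CRF.
# Assumes the sklearn-crfsuite package; labels and features are illustrative.
import sklearn_crfsuite

def word_features(sent, i):
    """Simple lexical features for token i of a tokenized sentence."""
    word = sent[i]
    return {
        "word": word.lower(),
        "is_filler": word.lower() in {"uh", "um"},
        "prev_word": sent[i - 1].lower() if i > 0 else "<s>",
        "next_word": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
        # Word repetition is a strong cue for simple repetition disfluencies.
        "repeats_next": i < len(sent) - 1 and word.lower() == sent[i + 1].lower(),
    }

# Toy training example: "to boston" is the reparandum, repaired by "to denver".
sents = [["i", "want", "a", "flight", "to", "boston", "uh", "to", "denver"]]
labels = [["O", "O", "O", "O", "B-EDIT", "I-EDIT", "O", "O", "O"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
y = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)

test = ["show", "me", "flights", "to", "to", "denver"]
print(crf.predict([[word_features(test, i) for i in range(len(test))]]))
```

Because the CRF labels the whole edit region jointly, no separate heuristic step is needed to convert interruption-point decisions into a reparandum span, which is the advantage the abstract highlights over the HMM pipeline.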
