Detection of agreement and disagreement in broadcast conversations


W. Wang, S. Yaman, K. Precoda, C. Richey, and G. Raymond, "Detection of agreement and disagreement in broadcast conversations," in Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT), 2011, pp. 374–378.


We present Conditional Random Field-based approaches for detecting agreement and disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena, and explore various lexical, structural, durational, and prosodic features. We compare performance when using features extracted from automatically generated annotations against performance when using human annotations, and investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches: random downsampling and ensemble downsampling. Overall, our approach achieves 79.2% precision, 50.5% recall, and 61.7% F1 for agreement detection, and 69.2% precision, 46.9% recall, and 55.9% F1 for disagreement detection, on the English broadcast conversation data.
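The two sampling strategies named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the assumption that examples are held in plain Python lists are illustrative only. Random downsampling discards majority-class examples until the classes balance; ensemble downsampling instead partitions the majority class into disjoint chunks, pairing each chunk with the full minority set so that every majority example is used by exactly one ensemble member.

```python
import random

def random_downsample(majority, minority, seed=0):
    """Balance classes by randomly keeping only len(minority)
    examples from the majority class (the rest are discarded)."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)) + minority

def ensemble_downsample(majority, minority, seed=0):
    """Shuffle the majority class, split it into disjoint chunks of
    minority-class size, and pair each chunk with the full minority
    set. Each resulting list is the training set for one ensemble
    member, so no majority example is wasted."""
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    k = len(minority)
    return [shuffled[i:i + k] + minority
            for i in range(0, len(shuffled), k)]
```

The ensemble variant trades one classifier for several, but keeps all of the (scarce) training signal; the individual models' decisions are then combined, e.g. by voting.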
