Detection of agreement and disagreement in broadcast conversations

Citation

W. Wang, S. Yaman, K. Precoda, C. Richey, and G. Raymond, “Detection of agreement and disagreement in broadcast conversations,” in Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT), 2011, pp. 374–378.

Abstract

We present Conditional Random Field (CRF)-based approaches for detecting agreement and disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena. Various lexical, structural, durational, and prosodic features are explored. We compare the performance when using features extracted from automatically generated annotations against that when using human annotations. We investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches: random downsampling and ensemble downsampling. Overall, our approach achieves 79.2% precision, 50.5% recall, and 61.7% F1 for agreement detection, and 69.2% precision, 46.9% recall, and 55.9% F1 for disagreement detection, on the English broadcast conversation data.
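The two sampling strategies named in the abstract can be sketched roughly as follows. This is a minimal illustration of the general techniques, not the paper's exact protocol; the helper names and parameters are hypothetical.

```python
import random

def random_downsample(majority, minority, seed=0):
    """Build one balanced training set: all minority-class examples
    plus an equally sized random subset of the majority class."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)) + minority

def ensemble_downsample(majority, minority, n_models=5, seed=0):
    """Build several balanced training sets, each pairing the full
    minority class with a different slice of the (shuffled) majority
    class; one classifier per subset, combined at prediction time."""
    rng = random.Random(seed)
    shuffled = rng.sample(majority, len(majority))
    k = len(minority)
    subsets = []
    for i in range(n_models):
        start = (i * k) % len(shuffled)
        chunk = shuffled[start:start + k]
        if len(chunk) < k:  # wrap around at the end of the list
            chunk += shuffled[:k - len(chunk)]
        subsets.append(chunk + minority)
    return subsets
```

Random downsampling discards most majority-class data in a single draw, whereas the ensemble variant trains one model per balanced subset so that more of the majority class is used overall, with the models' outputs combined (e.g., by voting or score averaging).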
