Unbiased discourse segmentation evaluation

Citation

John Niekrasz and Johanna D. Moore, "Unbiased discourse segmentation evaluation," in 2010 IEEE Spoken Language Technology Workshop, pp. 43–48, Dec. 2010.

Abstract

In this paper, we show that the performance measures Pk and WindowDiff, commonly used for discourse, topic, and story segmentation evaluation, are biased in favor of segmentations with fewer or adjacent segment boundaries. By analytical and empirical means, we show how this results in a failure to penalize substantially defective segmentations. Our novel unbiased measure k-κ corrects this, providing a single score that accounts for chance agreement. We also propose additional statistics that may be used to characterize important properties of segmentations, such as boundary clumping. We go on to replicate a recent spoken-language topic segmentation experiment, drawing conclusions that differ substantially from previous studies concerning the effectiveness of state-of-the-art topic segmentation algorithms.

Keywords: Histograms, Equations, Proposals, Indexes, Length measurement, Image edge detection, Mathematical model
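To make the abstract's claim concrete, the following is a minimal sketch of the standard Pk and WindowDiff measures the paper critiques. The segmentation representation (a list of per-position segment labels) and the window-size heuristic (half the mean reference segment length) are common conventions, not taken from the paper itself; the k-κ measure the paper proposes is not reproduced here.

```python
def half_mean_seg_len(ref):
    """Common heuristic: k is half the mean reference segment length."""
    return max(1, round(len(ref) / (2 * len(set(ref)))))

def pk(ref, hyp, k=None):
    """Pk: fraction of windows of width k in which ref and hyp disagree
    about whether the window's endpoints lie in the same segment.
    ref, hyp are per-position segment labels, e.g. [0, 0, 0, 1, 1, 1]
    (an illustrative encoding, not the paper's notation)."""
    n = len(ref)
    k = k or half_mean_seg_len(ref)
    errors = sum(
        (ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
        for i in range(n - k)
    )
    return errors / (n - k)

def window_diff(ref, hyp, k=None):
    """WindowDiff: fraction of windows of width k in which ref and hyp
    contain a different number of segment boundaries."""
    n = len(ref)
    k = k or half_mean_seg_len(ref)

    def boundaries(seg, i, j):
        # Count label changes (boundaries) between positions i and j.
        return sum(seg[t] != seg[t + 1] for t in range(i, j))

    errors = sum(
        boundaries(ref, i, i + k) != boundaries(hyp, i, i + k)
        for i in range(n - k)
    )
    return errors / (n - k)
```

Note that a degenerate hypothesis with no boundaries at all (e.g. `hyp = [0] * len(ref)`) still scores well on both measures whenever most windows happen to fall inside a single reference segment, which is the bias toward fewer boundaries that the paper's k-κ measure is designed to correct.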

