This paper presents the results of an initial investigation and experiments into automatic action item detection from transcripts of multi-party human-human meetings.
We investigated automatic action item detection from transcripts of multi-party meetings.
In this paper, we present a multimodal discourse ontology that serves as a knowledge representation and annotation framework for the discourse understanding component of an artificial personal office assistant.
We present a set of annotations of hierarchical topic segmentations and action item sub-dialogues collected over 65 meetings from the ICSI and ISL meeting corpora, designed to support automatic meeting understanding and analysis.
We report experiences and insights resulting from the first two years of work on two similar projects in meeting tracking and understanding: the DARPA-funded CALO project and the Swiss National research project IM2.
This paper describes current and planned research efforts towards developing multimodal discourse understanding for an automated personal office assistant.
In this paper, we present research toward ontology-based understanding of discourse in meetings and describe an ontology of multimodal discourse designed for this purpose.
This paper describes a data collection process aimed at gathering human-computer dialogs in high-stress or “busy” domains where the user is concentrating on tasks other than the conversation, for example, when driving a car.
Managing uncertainty in dialogue information state for real-time understanding of multi-human meeting dialogue
Our ultimate aim is to model human-human dialogue (to the extent that it is feasible) in real time, providing useful services (e.g., relevant document retrieval) and answering queries about the dialogue state and history (e.g., “What action items do we have so far?”).