This paper presents the results of an initial investigation and experiments into automatic action item detection from transcripts of multi-party human-human meetings.
Shallow Discourse Structure for Action Item Detection
We investigated automatic action item detection from transcripts of multi-party meetings.
A Multimodal Discourse Ontology for Meeting Understanding
In this paper, we present a multimodal discourse ontology that serves as a knowledge representation and annotation framework for the discourse understanding component of an artificial personal office assistant.
Meeting Structure Annotation: Data and Tools
We present a set of annotations of hierarchical topic segmentations and action item sub-dialogues collected over 65 meetings from the ICSI and ISL meeting corpora, designed to support automatic meeting understanding and analysis.
Collaborative and argumentative models of natural discussions
In this paper, we report experiences and insights from the first two years of work on two similar projects on meeting tracking and understanding: the DARPA-funded CALO project and the Swiss National research project IM2.
Ontology-based multi-party meeting understanding
This paper describes current and planned research efforts towards developing multimodal discourse understanding for an automated personal office assistant.
Ontology-based discourse understanding for a persistent meeting assistant
In this paper, we present research toward ontology-based understanding of discourse in meetings and describe an ontology of multimodal discourse designed for this purpose.
A Wizard of Oz framework for collecting spoken human-computer dialogs
This paper describes a data collection process aimed at gathering human-computer dialogs in high-stress or "busy" domains where the user is concentrating on tasks other than the conversation, for example, while driving a car.
Managing uncertainty in dialogue information state for real time understanding of multi-human meeting dialogue
Our ultimate aim is to model human-human dialogue (to the extent that it is feasible) in real-time, providing useful services (e.g. relevant document retrieval) and answering queries about the dialogue state and history (e.g. “what action items do we have so far?”).