Ontology-based multi-party meeting understanding

Citation

Matthew Purver, John Niekrasz, and Stanley Peters. Ontology-based multi-party meeting understanding. In Proceedings of the 2005 CHI Workshop on The Virtuality Continuum Revisited, April 2005.

Introduction

This paper describes current and planned research efforts towards developing multimodal discourse understanding for an automated personal office assistant. The research is undertaken as part of a project called the Cognitive Assistant that Learns and Organizes (CALO) (see http://www.ai.sri.com/project/CALO). The CALO assistant is intended to aid users both individually and as a group in performing office-related tasks such as coordinating schedules, providing information relevant to completing tasks, keeping a record of meetings, and assisting in carrying out decisions.

Our focus within this enterprise is on understanding, describing, and automatically participating in multimodal human-human and human-computer discourse among CALO users and the system itself. The system realizes this function as a persistent presence before, during, and after meetings: first by helping to set up and coordinate meetings and meeting agendas; second by extracting detailed information about what was discussed, what actions the participants took, and what decisions were reached; third by interactively reporting on this extracted information; and eventually by providing relevant and useful information or responding to queries during the meeting itself.
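To make the kind of extracted meeting information described above more concrete, the sketch below shows one way such structured records might look. It is a minimal, hypothetical Python illustration only: the class names, fields, and example values are assumptions made for this sketch, not the ontology or implementation actually used by CALO or described in the paper.

    # Hypothetical sketch of structured records for extracted meeting content.
    # All class names and fields are illustrative assumptions, not CALO's ontology.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Participant:
        name: str
        role: str = "attendee"


    @dataclass
    class Decision:
        summary: str
        supporters: List[Participant] = field(default_factory=list)


    @dataclass
    class ActionItem:
        description: str
        owner: Participant


    @dataclass
    class MeetingRecord:
        topic: str
        participants: List[Participant] = field(default_factory=list)
        decisions: List[Decision] = field(default_factory=list)
        action_items: List[ActionItem] = field(default_factory=list)

        def report(self) -> str:
            """Produce a short textual summary, loosely analogous to
            interactive reporting on extracted meeting information."""
            lines = [f"Meeting on: {self.topic}"]
            lines += [f"  Decision: {d.summary}" for d in self.decisions]
            lines += [f"  Action: {a.description} (owner: {a.owner.name})"
                      for a in self.action_items]
            return "\n".join(lines)


    # Example usage with made-up content.
    alice = Participant("Alice", role="chair")
    bob = Participant("Bob")
    record = MeetingRecord(
        topic="Q3 schedule coordination",
        participants=[alice, bob],
        decisions=[Decision("Move the review to Friday", supporters=[alice, bob])],
        action_items=[ActionItem("Circulate the revised agenda", owner=bob)],
    )
    print(record.report())

In an ontology-based approach such as the one named in the paper's title, records of this kind would be instances of a shared ontology that other CALO components can query and reason over, rather than ad hoc classes as in this sketch.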

