
  March 1, 2018

Explanation to Avert Surprise

SRI Authors: Melinda Gervasio, Karen Myers, Eric Yeh

Citation


Gervasio, M., Myers, K., Yeh, E., and Adkins, B. (2018). Explanation to avert surprise. Joint Proceedings of the ACM IUI Workshops, Vol. 2068.


Abstract

Most explanation schemes are reactive and informational: explanations are provided in response to specific user queries and focus on making the system’s reasoning more transparent. In mixed autonomy settings that involve teams of humans and autonomous agents, proactive explanation that anticipates and preempts potential surprises can be particularly valuable. By providing timely, succinct, and context-sensitive explanations, autonomous agents can avoid perceived faulty behavior and the consequent erosion of trust, enabling more fluid collaboration. We present an explanation framework based on the notion of explanation drivers—i.e., the intent or purpose behind agent explanations. We focus on explanations meant to reconcile expectation violations and enumerate a set of triggers for proactive explanation. Most work on explainable AI focuses on intelligibility; investigating explanation in mixed autonomy settings helps illuminate other important explainability issues such as purpose, timing, and impact.
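The abstract's notion of proactive, trigger-based explanation can be sketched informally. The snippet below is a hypothetical illustration, not the paper's actual framework: the trigger names, `AgentEvent` class, and `proactive_explanation` function are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical triggers for proactive explanation; the paper enumerates
# its own set, which is not reproduced here.
TRIGGERS = {"expectation_violation", "plan_deviation", "goal_change"}

@dataclass
class AgentEvent:
    kind: str        # what the agent just did or observed
    expected: bool   # whether the human teammate likely anticipated it

def proactive_explanation(event: AgentEvent) -> Optional[str]:
    """Emit a succinct explanation only when a trigger fires,
    preempting a potential surprise rather than waiting for a query."""
    if event.expected:
        return None  # no expectation violation anticipated; stay quiet
    if event.kind in TRIGGERS:
        return f"Explaining proactively: this action involves a {event.kind}."
    return None
```

The key design point mirrored from the abstract is that explanation is driven by anticipated surprise (an unexpected, trigger-matching event) rather than by a user request.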
