Advice is a powerful tool for learning, but it also presents the challenge of bridging the gap between the high-level representations that easily capture human advice and the low-level representations over which the systems using that advice must operate. Drawing inspiration from studies of human motor skills and memory systems, we present an approach that converts human advice into synthetic or imagined training experiences, serving to scaffold the low-level representations of simple, reactive learning systems such as reinforcement learners. Research on the use of mental imagery and directed attention in motor and perceptual skills motivates our approach. We introduce the concept of a cognitive advice template for generating scripted, synthetic experiences and use saliency masking to further conceal irrelevant portions of training observations. We present experimental results for a deep reinforcement learning agent in a Minecraft-based game environment showing that such synthetic experiences improve performance, enabling the agent to learn faster and achieve higher rates of success.
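To make the idea concrete, the following minimal sketch illustrates how a scripted advice template might be expanded into synthetic, saliency-masked transitions for a replay-based learner. The template format, grid-style observations, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: expanding an advice template into synthetic,
# saliency-masked experiences for a replay-based reinforcement learner.
# Assumes simple 2D grid observations; names and format are illustrative.
import numpy as np
from dataclasses import dataclass

@dataclass
class AdviceTemplate:
    """Scripted description of an advised behavior (hypothetical format)."""
    action: int            # advised action in the depicted situation
    reward: float          # imagined outcome of following the advice
    salient_cells: list    # (row, col) cells the advice refers to

def apply_saliency_mask(observation, salient_cells, fill_value=0.0):
    """Conceal everything except the cells the advice refers to."""
    masked = np.full_like(observation, fill_value)
    for r, c in salient_cells:
        masked[r, c] = observation[r, c]
    return masked

def synthesize_experiences(template, prototype_observations):
    """Turn one advice template into several imagined transitions."""
    experiences = []
    for obs in prototype_observations:
        state = apply_saliency_mask(obs, template.salient_cells)
        next_state = state.copy()  # placeholder imagined successor state
        experiences.append((state, template.action, template.reward, next_state))
    return experiences

# Usage: seed the learner's replay buffer before (or alongside)
# real environment interaction.
rng = np.random.default_rng(0)
prototypes = [rng.random((5, 5)) for _ in range(4)]
advice = AdviceTemplate(action=2, reward=1.0, salient_cells=[(2, 2), (2, 3)])
replay_buffer = synthesize_experiences(advice, prototypes)
print(len(replay_buffer), replay_buffer[0][0].shape)
```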
We propose a framework toward more explainable reinforcement learning (RL) agents. The framework uses introspective analysis of an agent’s history of interaction with its environment to extract several interestingness elements regarding its behavior. Introspection operates at three distinct levels: first analyzing characteristics of the task the agent has to solve, then the agent’s behavior while interacting with the environment, and finally performing a meta-analysis that combines information gathered at the lower levels. The analyses rely on data already collected by standard RL algorithms, and we propose additional statistical data that an RL agent can easily collect while learning to help extract more meaningful elements. We provide insights on how an explanation framework can leverage the elements generated through introspection. Namely, they can help convey learned strategies to a human user, justify the agent’s decisions in relevant situations, denote its learned preferences and goals, and identify circumstances in which advice from the user might be needed.
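The sketch below shows, under illustrative assumptions, how a tabular Q-learner might accumulate extra statistics during learning and derive two simple interestingness-style elements from them. The specific elements (frequent states, near-tied action values) and all names are examples chosen here, not the analyses defined by the framework.

```python
# Hypothetical sketch: collecting introspection statistics alongside
# tabular Q-learning and deriving simple interestingness-style elements.
from collections import defaultdict
import numpy as np

class IntrospectiveQLearner:
    def __init__(self, n_actions, alpha=0.1, gamma=0.95):
        self.q = defaultdict(lambda: np.zeros(n_actions))
        self.visits = defaultdict(int)   # extra statistic gathered while learning
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        self.visits[s] += 1
        target = r + self.gamma * self.q[s_next].max()
        self.q[s][a] += self.alpha * (target - self.q[s][a])

    def frequent_states(self, k=5):
        """States the learned behavior passes through most often."""
        return sorted(self.visits, key=self.visits.get, reverse=True)[:k]

    def uncertain_states(self, margin=0.05):
        """States where the best and second-best actions are nearly tied,
        i.e., circumstances where advice from the user might be needed."""
        out = []
        for s, values in self.q.items():
            top2 = np.sort(values)[-2:]
            if top2[1] - top2[0] < margin:
                out.append(s)
        return out

# Usage on a toy chain environment: states 0..4, two actions (left/right).
agent = IntrospectiveQLearner(n_actions=2)
rng = np.random.default_rng(0)
s = 0
for _ in range(500):
    a = int(rng.integers(2))
    s_next = max(0, min(4, s + (1 if a == 1 else -1)))
    r = 1.0 if s_next == 4 else 0.0
    agent.update(s, a, r, s_next)
    s = 0 if s_next == 4 else s_next
print(agent.frequent_states(3), agent.uncertain_states())
```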
Most explanation schemes are reactive and informational: explanations are provided in response to specific user queries and focus on making the system’s reasoning more transparent. In mixed autonomy settings that involve teams of humans and autonomous agents, proactive explanation that anticipates and preempts potential surprises can be particularly valuable. By providing timely, succinct, and context-sensitive explanations, autonomous agents can avoid perceived faulty behavior and the consequent erosion of trust, enabling more fluid collaboration. We present an explanation framework based on the notion of explanation drivers, i.e., the intent or purpose behind agent explanations. We focus on explanations meant to reconcile expectation violations and enumerate a set of triggers for proactive explanation. Most work on explainable AI focuses on intelligibility; investigating explanation in mixed autonomy settings helps illuminate other important explainability issues such as purpose, timing, and impact.
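As a rough illustration of one such trigger, the sketch below fires a proactive explanation when the agent's chosen action deviates from a model of what the human teammate expects. The trigger, driver label, and example scenario are assumptions made here for illustration; the framework's actual set of triggers is not reproduced.

```python
# Hypothetical sketch: a single proactive-explanation trigger, fired when
# the agent's chosen action deviates from the teammate's expected action.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Explanation:
    driver: str    # intent or purpose behind the explanation
    message: str

def expectation_violation_trigger(expected_action: str,
                                  chosen_action: str,
                                  justification: Callable[[], str]) -> Optional[Explanation]:
    """Return a proactive explanation only when a surprise is anticipated."""
    if chosen_action == expected_action:
        return None  # no expectation violation, stay quiet
    return Explanation(
        driver="reconcile expectation violation",
        message=f"Choosing '{chosen_action}' instead of '{expected_action}' "
                f"because {justification()}.")

# Usage: the agent checks the trigger before acting and, if it fires,
# surfaces the explanation to the human teammate ahead of the surprise.
exp = expectation_violation_trigger(
    expected_action="take main road",
    chosen_action="take side road",
    justification=lambda: "the main road is reported blocked")
if exp:
    print(exp.driver, "->", exp.message)
```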
Asking questions is an integral part of learning. Asking questions can clarify concepts, test hypotheses, add missing information, or provide additional knowledge to facilitate learning. The last item motivates the work described in this paper. By asking questions, our system obtains information that serves as background knowledge for a base learner, enabling the base learner to make useful generalizations even with few training examples. In previous work, we developed static strategies for question asking. Here, we extend that work with a learning approach for acquiring question-asking strategies that better accommodates the interdependent nature of questions. We present experiments validating the approach and showing its usefulness for acquiring efficient, context-dependent question-asking strategies.
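The sketch below conveys the flavor of a context-dependent question-asking strategy: the next question is chosen greedily by expected information gain, conditioned on the answers already collected. The greedy criterion, data layout, and function names are assumptions for illustration, not the specific learning approach developed in the paper.

```python
# Hypothetical sketch: picking the next question by greedy information gain,
# conditioned on answers already obtained, so the strategy is context-dependent.
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, question):
    """Expected reduction in label entropy from asking `question`."""
    base = entropy([label for _, label in examples])
    splits = {}
    for answers, label in examples:
        splits.setdefault(answers.get(question), []).append(label)
    remainder = sum(len(v) / len(examples) * entropy(v) for v in splits.values())
    return base - remainder

def next_question(examples, candidate_questions, already_asked):
    """Pick the most informative question not yet asked in this context."""
    remaining = [q for q in candidate_questions if q not in already_asked]
    return max(remaining, key=lambda q: information_gain(examples, q), default=None)

# Usage on a toy dataset of (answers-so-far, label) pairs.
data = [({"color": "red", "shape": "round"}, "apple"),
        ({"color": "red", "shape": "long"}, "pepper"),
        ({"color": "yellow", "shape": "long"}, "banana")]
print(next_question(data, ["color", "shape"], already_asked=["color"]))
```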