Abstract: We approach the problem of understanding how people interact in collaborative settings, especially when individuals know little about their teammates, via Multiagent Inverse Reinforcement Learning (MIRL), in which the goal is to infer the reward functions guiding each individual's behavior given trajectories of a team's behavior during some task. Unlike […]
Sensor Control for Information Gain in Dynamic, Sparse and Partially Observed Environments
We present an approach to autonomous sensor control for information gathering in partially observable, dynamic, and sparsely sampled environments.
Global and Local Analysis of Interestingness for Competency-Aware Deep Reinforcement Learning
Our new framework provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms.
A Framework for Understanding and Visualizing Strategies of RL Agents
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
Outcome-Guided Counterfactuals for Reinforcement Learning Agents from a Jointly Trained Generative Latent Space
We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior.
Interestingness Elements for Explainable Reinforcement Learning: Understanding Agents’ Capabilities and Limitations
We propose an explainable reinforcement learning (XRL) framework that analyzes an agent’s history of interaction with the environment to extract interestingness elements that explain its behavior.
Interestingness Elements for Explainable Reinforcement Learning through Introspection
The framework uses introspective analysis of an agent’s history of interaction with its environment to extract several interestingness elements regarding its behavior.
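As a loose illustration of the kind of introspective measure such a framework might extract from an interaction history, the sketch below computes a "confidence" score per state from logged Q-values. This is a hypothetical example, not the papers' actual formulation; the `confidence` function and the toy history are invented here for illustration.

```python
# Hypothetical sketch (not the framework's actual method): one
# interestingness-style measure computed by introspecting an agent's
# logged action values. "Confidence" is the gap between the best and
# second-best action value, normalized by the value range to [0, 1];
# low values flag states where the agent has no clear preference.

def confidence(q_values):
    """Gap between the top two action values, scaled by the value range."""
    ordered = sorted(q_values, reverse=True)
    spread = ordered[0] - ordered[-1]
    if spread == 0:
        return 0.0  # all actions look identical: no preference at all
    return (ordered[0] - ordered[1]) / spread

# Introspect a toy interaction history: (state, q_values) pairs.
history = [
    ("s0", [1.0, 0.9, 0.1]),  # near-tie between the top two actions
    ("s1", [5.0, 0.0, 0.0]),  # one strongly preferred action
]
scores = {state: confidence(q) for state, q in history}
```

Low-confidence states like `s0` are exactly the moments such an analysis would surface to a user as noteworthy.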