We describe the expectations architects have for design proposals from conceptual sketches, and identify corresponding automated metrics from the literature.
Outcome-Guided Counterfactuals for Reinforcement Learning Agents from a Jointly Trained Generative Latent Space
We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior.
We present an approach that converts human advice into synthetic or imagined training experiences, serving to scaffold the low-level representations of simple, reactive learning systems such as reinforcement learners.
The framework uses introspective analysis of an agent’s history of interaction with its environment to extract several interestingness elements regarding its behavior.
We present an explanation framework based on the notion of explanation drivers, i.e., the intent or purpose behind agent explanations. We focus on explanations meant to reconcile expectation violations and enumerate a set of triggers for proactive explanation.
We describe a method for identifying and performing functional analysis of structured regions that are embedded in natural language documents, such as tables or key-value lists.
The spontaneous speech obtained during structured interviews of 26 depressed and nondepressed older men, a population that has received little study to date, was analyzed.
Language Analytics for Assessing Brain Health: Cognitive Impairment, Depression and Pre-Symptomatic Alzheimer's Disease
We present data demonstrating how brain health may be assessed by applying data-mining and text analytics to patient language.
Asking questions can clarify concepts, test hypotheses, add missing information, or provide additional knowledge to facilitate learning. The last item motivates the work described in this paper.