Advanced Computer Scientist, Artificial Intelligence Center
Pedro Sequeira, Ph.D., is an Advanced Computer Scientist at SRI International, Artificial Intelligence Center (AIC).
Dr. Sequeira’s research interests lie mainly in Machine Learning (ML) and involve the creation of autonomous, adaptive, and interactive systems that learn and reason under uncertainty. He develops ML mechanisms inspired by human learning and decision-making, and uses ML to better understand how humans learn and make decisions in complex tasks. He also conducts research on Explainable AI, developing mechanisms that help humans understand the behavior of autonomous Reinforcement Learning agents.
Dr. Sequeira is SRI’s PI on DARPA’s Artificial Social Intelligence for Successful Teams (ASIST) program and has served as technical lead on other DARPA research programs. He was a Program Chair for the 2020 ACM International Conference on Intelligent Virtual Agents (IVA) and is regularly a member of the Senior Program Committee for major AI conferences such as AAAI, IJCAI, and AAMAS. His research has received several awards at international, peer-reviewed venues, including the Best Paper Award at the International Conference on Affective Computing and Intelligent Interaction (ACII) in 2011 and the Best Paper Award in HRI Interaction Design at the ACM/IEEE International Conference on Human-Robot Interaction (HRI) in 2016.
Prior to joining SRI, Dr. Sequeira was an Associate Research Scientist at Northeastern University, where he did research on automated cognitive behavior analysis and on agent-based modeling of decision-makers in critical supply chains. He received his Ph.D. in Information Systems and Computer Engineering from Instituto Superior Técnico, Universidade de Lisboa, in Portugal, from which he also received an M.Sc. and an Engineering Degree.
Dr. Sequeira’s expertise spans Machine Learning, Reinforcement Learning, genetic programming, learning and reasoning in autonomous agents and multiagent systems, cognitive modeling, and affective computing, among other areas.
Recent publications
Our tool provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms, natively supporting the popular RLLib toolkit.
We approach the problem of understanding how people interact with each other in collaborative settings via Multiagent Inverse Reinforcement Learning (MIRL), where the goal is to infer the reward functions…
We present an approach for autonomous sensor control for information gathering under partially observable, dynamic and sparsely sampled environments.
Our new framework provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms.
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
Outcome-Guided Counterfactuals for Reinforcement Learning Agents from a Jointly Trained Generative Latent Space
We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior.