We propose a method for training an autonomous agent to accumulate a 3D scene graph representation of its environment while simultaneously learning to navigate through that environment.
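As a concrete illustration, below is a minimal sketch of the kind of scene graph the agent could accumulate during navigation. All class and method names (SceneNode, SceneGraph, add_observation, add_relation) are illustrative assumptions, not part of any existing library or of a finalized design.

from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """An object instance observed in the environment."""
    node_id: int
    label: str        # e.g. "chair", "table"
    position: tuple   # (x, y, z) in the world frame

@dataclass
class SceneGraph:
    """Accumulates object nodes and spatial relations during navigation."""
    nodes: dict = field(default_factory=dict)  # node_id -> SceneNode
    edges: set = field(default_factory=set)    # (id_a, relation, id_b)
    _next_id: int = 0

    def add_observation(self, label, position):
        """Insert a newly observed object and return its node id."""
        node = SceneNode(self._next_id, label, position)
        self.nodes[node.node_id] = node
        self._next_id += 1
        return node.node_id

    def add_relation(self, id_a, relation, id_b):
        """Record a spatial relation between two nodes, e.g. "near", "on"."""
        self.edges.add((id_a, relation, id_b))

# Usage: at each navigation step the agent folds new detections into the graph.
graph = SceneGraph()
chair = graph.add_observation("chair", (1.0, 0.5, 0.0))
table = graph.add_observation("table", (1.2, 0.5, 0.0))
graph.add_relation(chair, "near", table)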
Collaborative human-robot autonomy publications
SASRA: Semantically-aware Spatio-temporal Reasoning Agent for Vision-and-Language Navigation in Continuous Environments
This paper presents a novel approach for the Vision-and-Language Navigation (VLN) task in continuous 3D environments.
MaAST: Map Attention with Semantic Transformers for Efficient Visual Navigation
By using our novel attention schema and auxiliary rewards to better exploit scene semantics, we outperform multiple baselines trained on only raw inputs or implicit semantic information, while requiring 80% less agent experience.
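In the spirit of that map-attention idea, here is a hedged sketch of attention pooling over an egocentric semantic map. The paper's actual transformer architecture differs, and all names and shapes below (MapAttention, num_classes, embed_dim, the learned query) are assumptions for illustration only.

import torch
import torch.nn as nn

class MapAttention(nn.Module):
    """Attends over flattened semantic-map cells with a learned query."""
    def __init__(self, num_classes=16, embed_dim=32):
        super().__init__()
        self.cell_embed = nn.Linear(num_classes, embed_dim)   # per-cell features
        self.query = nn.Parameter(torch.randn(1, embed_dim))  # learned agent query

    def forward(self, semantic_map):
        # semantic_map: (H, W, num_classes) one-hot or soft class scores.
        h, w, c = semantic_map.shape
        cells = self.cell_embed(semantic_map.reshape(h * w, c))  # (HW, D)
        scores = cells @ self.query.T                            # (HW, 1)
        weights = torch.softmax(scores, dim=0)                   # attention over cells
        return (weights * cells).sum(dim=0)                      # (D,) pooled feature

# Usage: pool a 24x24 semantic map into one feature vector for the policy.
att = MapAttention(num_classes=16)
feat = att(torch.rand(24, 24, 16))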
Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs
This paper proposes a real-time navigation approach that integrates many sensor types while satisfying performance requirements and system constraints.
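For context, the sketch below builds a small pose graph with the GTSAM library, which provides the factor-graph machinery this line of work relies on. The paper's plug-and-play sensor-selection layer is not reproduced here, and the specific factors, keys, and noise values are illustrative assumptions.

import numpy as np
import gtsam

# Build a tiny 2D pose graph: one prior factor plus two odometry factors.
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
# Each odometry measurement becomes a between-factor; another sensor type
# would simply contribute additional factors over the same pose variables.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Initial guesses, deliberately perturbed; the optimizer corrects them.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.1, -0.1, 0.0))
initial.insert(1, gtsam.Pose2(0.9, 0.1, 0.0))
initial.insert(2, gtsam.Pose2(2.1, 0.1, 0.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))  # optimized pose of the last keyframe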
Robust Visual Path Following for Heterogeneous Mobile Platforms
We present a path-following system based on multi-camera visual odometry and visual landmark matching.
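To make the pipeline concrete, below is a hedged sketch of the control loop such a system implies: visual odometry supplies pose estimates (in the paper, corrected by landmark matches), and a pure-pursuit style controller steers toward the taught path. The get_pose/send_cmd callbacks, gains, and thresholds are assumptions, not the paper's implementation.

import math

def follow_path(path, get_pose, send_cmd, lookahead=0.5, v=0.3, max_steps=2000):
    """Pure-pursuit style follower over a taught path of (x, y) waypoints."""
    idx = 0
    for _ in range(max_steps):
        x, y, theta = get_pose()  # pose estimate from visual odometry
        # Advance the target index past waypoints already within lookahead.
        while idx < len(path) - 1 and math.hypot(path[idx][0] - x,
                                                 path[idx][1] - y) < lookahead:
            idx += 1
        tx, ty = path[idx]
        # Heading error to the target, wrapped to [-pi, pi].
        err = math.atan2(ty - y, tx - x) - theta
        err = math.atan2(math.sin(err), math.cos(err))
        send_cmd(v, 2.0 * err)  # linear velocity, P-controlled angular velocity
        if idx == len(path) - 1 and math.hypot(tx - x, ty - y) < 0.1:
            break

# Minimal closed-loop demo with a simulated unicycle standing in for the robot.
state = [0.0, 0.0, 0.0]  # x, y, heading

def get_pose():
    return tuple(state)

def send_cmd(v, w, dt=0.1):
    state[0] += v * math.cos(state[2]) * dt
    state[1] += v * math.sin(state[2]) * dt
    state[2] += w * dt

follow_path([(1.0, 0.0), (2.0, 1.0)], get_pose, send_cmd)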