
Conference Paper  September 2, 2019

Multi-Sensor Fusion for Motion Estimation in Visually-Degraded Environments

SRI Authors: Han-Pang Chiu, Supun Samarasekera

Citation

Mikhail Sizintsev, Abhinav Rajvanshi, Han-Pang Chiu, Kevin Kaighn, Supun Samarasekera, David P. Snyder: Multi-Sensor Fusion for Motion Estimation in Visually-Degraded Environments. SSRR 2019: 7-14

Abstract

Accurate motion estimation using low-cost sensors is critical for autonomous robots operating in visually-degraded environments, with applications such as infrastructure inspection and indoor rescue missions. This paper analyzes the feasibility of using multiple low-cost on-board sensors for ground robots or drones navigating in visually-degraded environments. We select four low-cost, small-size sensors for evaluation: an IMU, EO stereo cameras with LED lights, active IR cameras, and a 2D LiDAR. We adapt and extend state-of-the-art multi-sensor motion estimation techniques, including a factor graph framework for sensor fusion, to operate under poor illumination conditions. We evaluate different sensor combinations within the factor graph framework and benchmark each combination's accuracy on two representative datasets acquired in completely dark environments. Our results show the potential of this sensor fusion approach toward an improved ego-motion solution in challenging dark environments.
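The abstract centers on a factor graph framework for fusing motion estimates from multiple sensors. As a rough illustration of that idea (a minimal sketch, not the authors' implementation), the snippet below uses the open-source GTSAM library to fuse two hypothetical relative-pose measurements of the same motion step, one from a visual-odometry pipeline and one from LiDAR scan matching. All noise sigmas and measurement values are invented for illustration.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

# Build a small pose graph: two robot poses, with relative-pose
# (odometry) measurements from two independent sensor pipelines.
graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose at the origin with a tight prior.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Hypothetical measurements of the same motion step: one from visual
# odometry, one from LiDAR scan matching. Each gets its own (made-up)
# noise model, so the optimizer weighs the two sources accordingly.
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.10))
lidar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))

vo_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.02, 0.05, 0.0))
lidar_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.98, -0.02, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), vo_delta, vo_noise))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), lidar_delta, lidar_noise))

# Initial guess for the unknown poses.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0)))

# Solve the nonlinear least-squares problem; the fused estimate of
# X(1) reflects both measurements, weighted by their noise models.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))
```

In the paper's setting, each additional modality (IMU preintegration, stereo EO, active IR, 2D LiDAR) would contribute its own factor type to a graph like this, and combinations are evaluated by enabling or disabling those factors.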
