Multi-Modal Sensor Fusion Algorithm for Ubiquitous Infrastructure-Free Localization in Vision-Impaired Environments

Citation

Oskiper, T., Chiu, H., Zhu, Z., Samarasekera, S., & Kumar, R. (2010). "Multi-modal sensor fusion algorithm for ubiquitous infrastructure-free localization in vision-impaired environments," 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1513-1519, 18-22 October 2010.

Abstract

In this paper, we present a unified approach to camera tracking based on an error-state Kalman filter. The filter fuses relative (local) measurements, obtained from image-based motion estimation through visual odometry, with global measurements: landmark matches against a pre-built visual landmark database and range measurements from radio frequency (RF) ranging radios. To evaluate the system, we use the camera poses it outputs to render views from a 3D graphical model built in the same coordinate frame as the landmark database, which also serves as the global coordinate system, and compare the rendered views to the actual video images. These results demonstrate both the long-term stability and the overall accuracy of our algorithm, which is intended to solve the GPS-denied ubiquitous camera tracking problem under both vision-aided and vision-impaired conditions.
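The core mechanism the abstract describes, a Kalman filter that propagates the pose with relative odometry deltas and corrects it with absolute RF range fixes, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the class name ErrorStateKF, the position-only state, and all parameter values are hypothetical, not the paper's implementation (which also estimates orientation and uses an explicit error-state formulation).

```python
import numpy as np

class ErrorStateKF:
    """Illustrative sketch (not the paper's filter): a 3-D position is
    propagated by relative visual-odometry translation deltas and
    corrected by global range measurements to an RF radio node at a
    known location."""

    def __init__(self, p0, P0):
        self.p = np.asarray(p0, dtype=float)   # nominal position estimate
        self.P = np.asarray(P0, dtype=float)   # 3x3 error covariance

    def propagate(self, delta_p, Q):
        """Relative (local) update: apply an odometry translation delta
        and grow the error covariance by the process noise Q."""
        self.p = self.p + delta_p
        self.P = self.P + Q

    def range_update(self, r_meas, radio_pos, sigma_r):
        """Global update: fuse a range measurement to a radio at a known
        position via a first-order (linearized) range model."""
        diff = self.p - radio_pos
        r_pred = np.linalg.norm(diff)               # predicted range
        H = (diff / r_pred).reshape(1, 3)           # Jacobian of range w.r.t. position
        S = H @ self.P @ H.T + sigma_r**2           # innovation covariance (1x1)
        K = self.P @ H.T / S                        # Kalman gain (3x1)
        self.p = self.p + (K * (r_meas - r_pred)).ravel()  # apply correction
        self.P = (np.eye(3) - K @ H) @ self.P       # covariance update

# Usage: drift accumulates under odometry alone; a range fix pulls it back.
kf = ErrorStateKF(p0=[0.0, 0.0, 0.0], P0=np.eye(3) * 0.01)
kf.propagate(delta_p=np.array([1.0, 0.0, 0.0]), Q=np.eye(3) * 0.05)
kf.range_update(r_meas=9.0, radio_pos=np.array([10.0, 0.0, 0.0]), sigma_r=0.5)
print(kf.p, np.diag(kf.P))
```

In a full error-state design, the correction K times the innovation is estimated for a separate error state, injected into the nominal state, and then reset to zero; the sketch above collapses those steps into a direct update of the nominal position.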

Keywords: Cameras, Visualization, Mathematical model, Kalman filters, Three dimensional displays, Robot kinematics
