Visual Odometry

Citation

Nister, D., Naroditsky, O., & Bergen, J. (June–July 2004). "Visual odometry." In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), Vol. 1, pp. I-652–I-659.

Abstract

We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates what we call visual odometry, i.e., motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual odometry can also be used in conjunction with information from other sources such as GPS, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results with an autonomous ground vehicle. We give examples of camera trajectories estimated purely from images over previously unseen distances and periods of time.
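The abstract's "geometric hypothesize-and-test architecture" refers to a RANSAC-style loop: repeatedly fit a motion hypothesis to a minimal sample of feature correspondences, score it by how many other correspondences agree, and keep the best-supported hypothesis. The sketch below is a toy 2D illustration of that idea only, not the paper's estimator (which operates on calibrated image correspondences): it robustly recovers a planar rigid motion (rotation plus translation) from point tracks contaminated with mismatches. All names and the synthetic data are illustrative assumptions.

```python
import math
import random

def rigid_from_pair(p0, p1, q0, q1):
    # Hypothesize a 2D rigid motion (theta, tx, ty) mapping frame A to
    # frame B from a minimal sample of two correspondences p -> q.
    a = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    b = math.atan2(q1[1] - q0[1], q1[0] - q0[0])
    theta = b - a
    c, s = math.cos(theta), math.sin(theta)
    tx = q0[0] - (c * p0[0] - s * p0[1])
    ty = q0[1] - (s * p0[0] + c * p0[1])
    return theta, tx, ty

def ransac_rigid(src, dst, iters=200, tol=0.05, seed=0):
    # Hypothesize-and-test: sample minimal sets, score each hypothesis by
    # its inlier count, keep the motion supported by the most tracks.
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)
        theta, tx, ty = rigid_from_pair(src[i], src[j], dst[i], dst[j])
        c, s = math.cos(theta), math.sin(theta)
        inliers = 0
        for (x, y), (u, v) in zip(src, dst):
            du = (c * x - s * y + tx) - u
            dv = (s * x + c * y + ty) - v
            if du * du + dv * dv < tol * tol:
                inliers += 1
        if inliers > best_inliers:
            best, best_inliers = (theta, tx, ty), inliers
    return best, best_inliers

# Synthetic demo: 30 correct feature tracks under a known motion,
# plus 10 gross mismatches (outliers).
rng = random.Random(42)
true_theta, true_t = 0.3, (1.0, -0.5)
c, s = math.cos(true_theta), math.sin(true_theta)
src = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(30)]
dst = [(c * x - s * y + true_t[0], s * x + c * y + true_t[1]) for x, y in src]
for _ in range(10):
    src.append((rng.uniform(-5, 5), rng.uniform(-5, 5)))
    dst.append((rng.uniform(-5, 5), rng.uniform(-5, 5)))

(theta, tx, ty), inliers = ransac_rigid(src, dst)
print(f"theta={theta:.3f}  t=({tx:.3f}, {ty:.3f})  inliers={inliers}/40")
```

Because any hypothesis drawn from two correct tracks fits all 30 correct tracks exactly, while no hypothesis can make the random mismatches agree, the loop recovers the true motion despite 25% contamination; the same logic, with a minimal-case relative-pose solver in place of the two-point fit, underlies robust camera-motion estimation.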
