Camera self-calibration: theory and experiments

Citation

Faugeras, O. D., Luong, Q.-T., and Maybank, S. J. Camera self-calibration: theory and experiments, in European Conference on Computer Vision, Santa Margherita Ligure, Italy, pp. 321-334, 1992.

Abstract

The problem of finding the internal orientation of a camera (camera calibration) is extremely important for practical applications. In this paper a complete method for calibrating a camera is presented. In contrast with existing methods it does not require a calibration object with a known 3D shape. The new method requires only point matches from image sequences. It is shown, using experiments with noisy data, that it is possible to calibrate a camera just by pointing it at the environment, selecting points of interest and then tracking them in the image as the camera moves. It is not necessary to know the camera motion.
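To make "point matches only" concrete, the following is a minimal sketch (illustrative Python with NumPy, not the authors' implementation) of estimating the fundamental matrix that encodes the epipolar geometry directly from tracked correspondences, using the standard normalised eight-point algorithm; the function and variable names are assumptions for illustration.

```python
import numpy as np

def normalise(pts):
    """Translate and scale 2D points so their centroid is at the origin and
    their mean distance from it is sqrt(2); returns homogeneous points and
    the 3x3 normalising transform."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - centroid, axis=1))
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def fundamental_matrix(x1, x2):
    """Estimate F from N >= 8 matches x1 <-> x2 (each Nx2, pixel coordinates),
    so that x2^T F x1 = 0 for corresponding points."""
    p1, T1 = normalise(x1)
    p2, T2 = normalise(x2)
    # Each match contributes one row of the linear system A vec(F) = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 so the estimated epipolar geometry is consistent.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalisation
```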
The camera calibration is computed in two steps. In the first step the epipolar transformation is found. Two methods for obtaining the epipoles are discussed: one, due to Sturm, is based on projective invariants; the other is based on a generalisation of the essential matrix. The second step of the computation uses the so-called Kruppa equations, which link the epipolar transformation to the image of the absolute conic. After the camera has made three or more movements, the Kruppa equations can be solved for the coefficients of the image of the absolute conic. The solution is found using a continuation method, which is briefly described. The intrinsic parameters of the camera are obtained from the equation for the image of the absolute conic.
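The sketch below (again illustrative Python, not the paper's implementation) shows two pieces of this pipeline: the epipoles can be read off a fundamental matrix as its right and left null vectors, and once the Kruppa equations have been solved for the image of the absolute conic omega = (K K^T)^(-1), the intrinsic-parameter matrix K follows from a Cholesky factorisation and an inversion. The function names are assumptions for illustration.

```python
import numpy as np

def epipoles(F):
    """The epipoles are the right and left null vectors of the rank-2
    fundamental matrix F (assumed finite here, i.e. not at infinity)."""
    _, _, Vt = np.linalg.svd(F)
    e1 = Vt[-1]            # F e1 = 0: epipole in the first image
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]            # F^T e2 = 0: epipole in the second image
    return e1 / e1[2], e2 / e2[2]

def intrinsics_from_absolute_conic(omega):
    """Recover the upper-triangular calibration matrix K from the image of
    the absolute conic omega = K^{-T} K^{-1}, e.g. as obtained by solving
    the Kruppa equations."""
    L = np.linalg.cholesky(omega)   # omega = L L^T, L lower-triangular
    K = np.linalg.inv(L.T)          # then K^{-T} = L, so K = (L^T)^{-1}
    return K / K[2, 2]              # fix the overall projective scale
```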

The results of experiments with synthetic noisy data are reported and possible enhancements to the method are suggested.
