Matching Vehicles Under Large Pose Transformations Using Approximate 3D Models and Piecewise MRF Model

Citation

Guo, Y., Rao, C., Samarasekera, S., Kim, J., Kumar, R., & Sawhney, H. S. (2008, June). "Matching vehicles under large pose transformations using approximate 3D models and piecewise MRF model." In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1-8, 23-28 June 2008.

Abstract

We propose a robust object recognition method based on approximate 3D models that can effectively match objects under large viewpoint changes and partial occlusion. The specific problem we solve is: given two views of an object, determine whether they show the same object or different objects. Our domain of interest is vehicles, but the approach generalizes to other man-made rigid objects. A key contribution of our approach is the use of approximate models with locally and globally constrained rendering to determine matching objects. We use a compact set of 3D models to provide geometric constraints and to transfer appearance features for object matching across disparate viewpoints. The closest model from the set, together with its pose with respect to the data, is used to render an object both at the pixel (local) level and at the region/part (global) level. In particular, symmetry and semantic part ownership are used to extrapolate appearance information. A piecewise Markov Random Field (MRF) model combines the observations obtained at the local pixel and global region levels, and Belief Propagation (BP) with reduced memory requirements solves the MRF model efficiently. No training is required, and a realistic image of the object in a disparate viewpoint can be obtained from as few as a single input image. Experimental results on vehicle data from multiple sensor platforms demonstrate the efficacy of our method.
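
The abstract describes combining local pixel-level and global region-level observations in a piecewise MRF solved by Belief Propagation. The sketch below is a minimal, generic illustration of that idea, not the paper's implementation: it assumes a 4-connected grid MRF over binary match/mismatch labels, a Potts smoothness term, a toy data term that blends a hypothetical pixel-level appearance difference with a hypothetical region-level score, and standard min-sum loopy BP without the paper's memory reduction. All function names, costs, and parameters are assumptions made for the example.

```python
"""Minimal sketch (not the paper's code): min-sum loopy Belief Propagation on a
4-connected grid MRF whose data term fuses local pixel-level and global
region-level evidence into a per-site match/mismatch label."""
import numpy as np


def loopy_bp(unary, smoothness=0.5, iters=30):
    """Approximate MAP labels for a grid MRF with a Potts pairwise term.

    unary: (H, W, L) data costs for L labels at each site.
    Returns an (H, W) array of label indices.
    """
    H, W, L = unary.shape
    # msgs[d] holds the message arriving at each site from its neighbour in
    # direction d: 0 = from above, 1 = from below, 2 = from left, 3 = from right.
    msgs = np.zeros((4, H, W, L))
    # (direction, opposite direction at the sender, roll shift, roll axis)
    dirs = [(0, 1, 1, 0), (1, 0, -1, 0), (2, 3, 1, 1), (3, 2, -1, 1)]

    for _ in range(iters):
        new_msgs = np.empty_like(msgs)
        for d, opp, shift, axis in dirs:
            # Cost at the sending site: unary plus all incoming messages except
            # the one that came from the receiving site.
            h = unary + msgs.sum(axis=0) - msgs[opp]
            # Potts term (min-sum): keep the label or switch and pay `smoothness`.
            m = np.minimum(h, h.min(axis=-1, keepdims=True) + smoothness)
            m -= m.mean(axis=-1, keepdims=True)         # normalise for stability
            new_msgs[d] = np.roll(m, shift, axis=axis)  # deliver to the neighbour
        # Border sites have no neighbour in some directions; zero those messages.
        new_msgs[0][0, :] = 0.0
        new_msgs[1][-1, :] = 0.0
        new_msgs[2][:, 0] = 0.0
        new_msgs[3][:, -1] = 0.0
        msgs = new_msgs

    belief = unary + msgs.sum(axis=0)
    return belief.argmin(axis=-1)


if __name__ == "__main__":
    # Toy data term: blend a hypothetical local appearance difference between a
    # rendered model view and the observation with a hypothetical region-level score.
    H, W = 48, 64
    rng = np.random.default_rng(0)
    local_diff = rng.random((H, W))            # stand-in for pixel-level mismatch
    region_score = np.zeros((H, W))
    region_score[:, W // 2:] = 0.8             # stand-in for a part-level mismatch
    unary = np.zeros((H, W, 2))                # label 0 = match, label 1 = mismatch
    unary[..., 0] = 0.6 * local_diff + 0.4 * region_score
    unary[..., 1] = 0.6 * (1.0 - local_diff) + 0.4 * (1.0 - region_score)

    labels = loopy_bp(unary)
    print("fraction of sites labelled 'match':", float((labels == 0).mean()))
```

Messages are mean-normalised to keep min-sum BP numerically stable; in the paper's setting the unary costs would come from the rendered approximate 3D model and its part/region ownership rather than the random stand-ins used here.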

