Conference Paper  January 1, 2013

Image to LIDAR Matching for Geotagging in Urban Environments

Citation


Matei, B. C., Vander Valk, N., Zhu, Z., Cheng, H., & Sawhney, H. S. (2013). Image to LIDAR matching for geotagging in urban environments. Paper presented at the IEEE Workshop on Applications of Computer Vision (WACV 2013), 15–17 January, Piscataway, NJ.

Abstract

We present a novel method for matching ground-based query images to a georeferenced LIDAR 3D dataset acquired from an airborne platform in urban environments. We address two main technical challenges: (i) the different modalities of the query and the reference data (electro-optical vs. LIDAR), which impose unique challenges on the matching problem; (ii) the very different viewing directions from which the query images and the LIDAR data, respectively, were acquired. We make two main technical contributions in this paper. First, we present a method for automatically extracting features from LIDAR data that remain largely invariant under projection into a 2D image and thus allow robust matching across modalities and changes in viewpoint. Second, we describe a matching technique that finds the best 3D pose relating the query input image to a rendered image of the 3D models. We present results of matching images to high-resolution LIDAR data covering five square kilometers of a city, demonstrating the power of the proposed matching method.
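The second contribution, at its core, is a search over candidate camera poses: render the georeferenced 3D model from each candidate pose, compare the rendering's features against those extracted from the query image, and keep the best-scoring pose. The toy sketch below illustrates only that outer loop; the `render_features`, `score`, and `best_pose` helpers, the yaw-only pose parameterization, and the synthetic point cloud are all illustrative assumptions, not the paper's actual cross-modal features or rendering pipeline.

```python
import math

def render_features(model_points, pose):
    """Project 3D points into a 2D view under a simple yaw-only pose.
    pose = (yaw_radians, tx, tz). Returns a set of coarse 2D feature bins,
    a stand-in for the projection-invariant features extracted from LIDAR."""
    yaw, tx, tz = pose
    feats = set()
    for x, y, z in model_points:
        # rotate about the vertical axis, then translate
        xr = math.cos(yaw) * x + math.sin(yaw) * z + tx
        zr = -math.sin(yaw) * x + math.cos(yaw) * z + tz
        if zr > 0.1:  # keep only points in front of the camera
            feats.add((round(xr / zr, 2), round(y / zr, 2)))
    return feats

def score(query_feats, rendered_feats):
    """Jaccard overlap between query-image features and a rendered view."""
    if not query_feats or not rendered_feats:
        return 0.0
    return len(query_feats & rendered_feats) / len(query_feats | rendered_feats)

def best_pose(query_feats, model_points, candidate_poses):
    """Return the candidate pose whose rendering best matches the query."""
    return max(candidate_poses,
               key=lambda p: score(query_feats, render_features(model_points, p)))

# toy "building facade" point cloud and a synthetic query taken from a known pose
model = [(x * 0.5, y * 0.5, 5.0) for x in range(4) for y in range(4)]
true_pose = (0.3, 0.2, 0.0)
query = render_features(model, true_pose)
candidates = [(yaw / 10, 0.2, 0.0) for yaw in range(-5, 6)]
print(best_pose(query, model, candidates))
```

In this synthetic setup the candidate whose rendering reproduces the query's feature set exactly wins; the real method replaces the toy projection and overlap score with LIDAR-derived features that survive the airborne-to-ground viewpoint change.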
