Vehicle Tracking Across Nonoverlapping Cameras Using Joint Kinematic and Appearance Features

Citation

B. C. Matei, H. S. Sawhney, and S. Samarasekera, “Vehicle tracking across nonoverlapping cameras using joint kinematic and appearance features,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3465–3472, doi: 10.1109/CVPR.2011.5995575.

Abstract

We describe a vehicle tracking algorithm that uses input from a network of nonoverlapping cameras. Our algorithm is based on a novel statistical formulation that uses joint kinematic and image appearance information to link local tracks of the same vehicle into global tracks with longer persistence. The algorithm can handle significant spatial separation between cameras and is robust to challenging tracking conditions such as high traffic density and complex road infrastructure. Under these conditions, traditional tracking formulations based on multi-hypothesis tracking (MHT) or joint probabilistic data association (JPDA) may fail to produce track associations across cameras because of the weak predictive models they employ. We make several new contributions in this paper. First, we model kinematic constraints between any two local tracks using road networks and transit time distributions. The transit time distribution for a route is computed dynamically as the convolution of normalized transit time distributions that are learned and adapted separately for each road. Second, we present a complete statistical tracker formulation that combines kinematic and appearance likelihoods within a multi-hypothesis framework. We have extensively evaluated the proposed algorithm using a network of ground-based cameras with narrow fields of view. Tracking results obtained on a large ground-truthed dataset demonstrate the effectiveness of the proposed algorithm.
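The sketch below is a minimal illustration, not the authors' implementation, of the two ideas named in the abstract: the transit-time distribution along a multi-segment route is obtained by convolving per-road transit-time distributions, and a candidate track link is scored by the product of a kinematic likelihood and an appearance likelihood. The time discretization, the Gaussian per-road models, the toy appearance term, and all function names are illustrative assumptions.

```python
# Hedged sketch of convolved transit-time distributions plus a joint
# kinematic-appearance link score. All models and parameters are assumed.
import numpy as np

DT = 0.5  # seconds per histogram bin (assumed discretization)

def road_transit_pdf(mean_s, std_s, n_bins=600):
    """Discretized transit-time distribution for one road segment
    (assumed Gaussian; the paper learns and adapts these per road)."""
    t = np.arange(n_bins) * DT
    pdf = np.exp(-0.5 * ((t - mean_s) / std_s) ** 2)
    return pdf / pdf.sum()

def route_transit_pdf(segment_pdfs):
    """Route transit-time distribution = convolution of its segments' PDFs."""
    out = segment_pdfs[0]
    for pdf in segment_pdfs[1:]:
        out = np.convolve(out, pdf)
    return out / out.sum()

def kinematic_likelihood(route_pdf, observed_gap_s):
    """Probability of the observed inter-camera time gap under the route model."""
    k = int(round(observed_gap_s / DT))
    return route_pdf[k] if 0 <= k < len(route_pdf) else 0.0

def appearance_likelihood(feat_a, feat_b, sigma=0.25):
    """Toy appearance term from a descriptor distance (stand-in for the
    paper's image-appearance likelihood)."""
    d = np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b))
    return float(np.exp(-0.5 * (d / sigma) ** 2))

def link_log_likelihood(segment_pdfs, observed_gap_s, feat_a, feat_b):
    """Joint score for linking two local tracks: kinematic x appearance."""
    kin = kinematic_likelihood(route_transit_pdf(segment_pdfs), observed_gap_s)
    app = appearance_likelihood(feat_a, feat_b)
    return float(np.log(max(kin * app, 1e-300)))

if __name__ == "__main__":
    # Two road segments between the cameras with ~40 s and ~25 s mean transit.
    segments = [road_transit_pdf(40.0, 6.0), road_transit_pdf(25.0, 4.0)]
    feat_a, feat_b = [0.8, 0.1, 0.3], [0.75, 0.15, 0.32]  # toy descriptors
    print(link_log_likelihood(segments, observed_gap_s=68.0,
                              feat_a=feat_a, feat_b=feat_b))
```

In a multi-hypothesis setting, such per-link scores would be evaluated for every feasible pairing of local tracks and the globally most likely set of associations retained; the details of that search are in the paper.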

Keywords: Cameras, Roads, Target tracking, Radar tracking, Kinematics, Vehicles, Signal processing algorithms

