Vice President, Information and Computing Sciences Director, Center for Vision Technologies
Rakesh “Teddy” Kumar, Ph.D., is director of the Center for Vision Technologies in Information and Computing Sciences at SRI International. In this role, he is responsible for leading research and development in the fields of computer vision, robotics, image processing, computer graphics, and visualization algorithms and systems for government and commercial clients.
In 2013, Kumar was honored with the Outstanding Achievement in Technology Development award from his alma mater, the University of Massachusetts Amherst School of Computer Science. He received the Sarnoff President's Award in 2009 and Sarnoff Technical Achievement awards for his work on registration of multi-sensor, multi-dimensional medical images and on alignment of video to three-dimensional scene models. The paper "Stable Vision-Aided Navigation for Large-Area Augmented Reality," which he co-authored, received the Best Paper Award at the IEEE Virtual Reality 2011 conference.
Kumar has served on NSF review and DARPA ISAT panels, and has been an associate editor for IEEE Transactions on Pattern Analysis and Machine Intelligence. He has co-authored more than 60 research publications and holds more than 50 patents. Kumar was a principal founder of multiple spin-off companies from Sarnoff Corporation, including VideoBrush, LifeClips, and SSG.
Kumar received his Ph.D. in computer science from the University of Massachusetts Amherst, his M.S. in electrical and computer engineering from the State University of New York at Buffalo, and his B.Tech. in electrical engineering from the Indian Institute of Technology, Kanpur, India.
Recent publications
Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager
In this presentation, we describe and demonstrate a novel vision-enhanced low-light imager system to provide GPS-denied navigation and ML-based visual scene understanding capabilities for both day and night operations.
Vision based Navigation using Cross-View Geo-registration for Outdoor Augmented Reality and Navigation Applications
In this work, we present a new vision-based cross-view geo-localization solution that matches camera images to a 2D satellite/overhead reference image database. We present solutions for both coarse search for cold start and fine alignment for continuous refinement.
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching.
Augmented Reality for Marine Fire Support Team Training
To provide FiSTs with the “sets and reps” required to develop and maintain proficiency, the Office of Naval Research 3D Warfighter Augmented Reality (3D WAR) program is developing an affordable augmented reality (AR) field simulator.
Optimized Simultaneous Aided Target Detection and Imagery based Navigation in GPS-Denied Environments
We describe and demonstrate a comprehensive optimized vision-based real-time solution to provide SATIN capabilities for current and future UAS in GPS-denied environments.
Cross-View and Cross-Modal Visual Geo-Localization for Augmented Reality and Robot/Vehicle Navigation Applications
We will present methods and results for estimation of geo-location and/or orientation for dismounts and platforms in wide-area, outdoor augmented reality and other applications under GPS-denied/challenged conditions.