Senior Technical Director, Vision and Robotics Laboratory, Center for Vision Technologies
Supun Samarasekera is senior technical director of the Vision and Robotics Laboratory in SRI International’s Center for Vision Technologies. In this role, he directs commercial and government projects in military training, augmented reality, robotics, aerial and ground video surveillance, 3D mapping and modeling, and medical image analysis.
Samarasekera has over 15 years of experience building integrated multi-sensor systems for training, robotics, surveillance, security, and other applications. He has played a leading role in the design and development of navigation algorithms for unmanned aerial vehicle (UAV), robotic, and dismounted infantry platforms. The paper “Stable Vision-Aided Navigation for Large-Area Augmented Reality,” which he co-authored, received the Best Paper Award at the IEEE Virtual Reality 2011 conference.
Prior to joining SRI in 1997, Samarasekera was a member of the technical staff at Siemens Corporate Research, where he was a key developer of 3D CT reconstruction algorithms. He also held the position of programmer analyst at the University of Pennsylvania, working on MR/CT segmentation and real-time 3D visualization. Samarasekera holds 25 patents for his work.
Samarasekera holds a B.S. in Computer Science and Engineering and an M.S. in Computer Information Systems from the University of Pennsylvania.
Recent publications
C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation
We propose C-SFDA, a curriculum learning aided self-training framework for source-free domain adaptation (SFDA) that adapts efficiently and reliably to changes across domains through selective pseudo-labeling. Specifically, we employ a curriculum learning scheme to promote learning from a restricted set of pseudo-labels selected according to their reliability.
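As a rough, hypothetical sketch of the reliability-based selective pseudo-labeling idea described above (the function and parameter names are illustrative, not taken from the paper):

    import torch
    import torch.nn.functional as F

    def select_pseudo_labels(logits, keep_fraction):
        """Keep only the most confident pseudo-labels (illustrative sketch).
        logits:        (N, C) model outputs on unlabeled target-domain samples
        keep_fraction: share of samples trusted at the current curriculum stage
        """
        probs = F.softmax(logits, dim=1)
        confidence, pseudo_labels = probs.max(dim=1)   # confidence as a reliability proxy
        num_keep = max(1, int(keep_fraction * len(confidence)))
        keep_idx = confidence.topk(num_keep).indices   # most reliable samples first
        return keep_idx, pseudo_labels[keep_idx]

    # Curriculum schedule: trust a growing fraction of pseudo-labels over epochs, e.g.
    #   frac = min(1.0, 0.2 + 0.1 * epoch)
    #   idx, labels = select_pseudo_labels(model(x_target), frac)
    #   loss = F.cross_entropy(model(x_target[idx]), labels)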
Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager
In this presentation, we describe and demonstrate a novel vision-enhanced low-light imager system to provide GPS-denied navigation and ML-based visual scene understanding capabilities for both day and night operations.
Vision based Navigation using Cross-View Geo-registration for Outdoor Augmented Reality and Navigation Applications
In this work, we present a new vision-based cross-view geo-localization solution matching camera images to a 2D satellite/overhead reference image database. We present solutions for both coarse search for cold start and fine alignment for continuous refinement.
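The abstract does not detail the method; as an illustrative sketch of the generic coarse-search-then-fine-alignment pattern it describes (the descriptors and similarity measures below are assumptions, not the authors' models):

    import numpy as np

    def coarse_search(ground_desc, tile_descs, top_k=5):
        """Cold-start step: retrieve the most similar satellite/overhead tiles
        by cosine similarity between precomputed embeddings (illustrative)."""
        sims = tile_descs @ ground_desc / (
            np.linalg.norm(tile_descs, axis=1) * np.linalg.norm(ground_desc) + 1e-8)
        return np.argsort(-sims)[:top_k]               # indices of candidate tiles

    def fine_alignment(ground_desc, tile_desc):
        """Refinement step: estimate heading by circularly shifting the ground
        descriptor and keeping the shift that best correlates with the tile."""
        scores = [np.dot(np.roll(ground_desc, s), tile_desc)
                  for s in range(len(ground_desc))]
        best_shift = int(np.argmax(scores))
        return 360.0 * best_shift / len(ground_desc)   # heading in degrees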
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching.
Ranging-Aided Ground Robot Navigation Using UWB Nodes at Unknown Locations
This paper describes a new ranging-aided navigation approach that does not require the locations of ranging radios.
Optimized Simultaneous Aided Target Detection and Imagery based Navigation in GPS-Denied Environments
We describe and demonstrate a comprehensive optimized vision-based real-time solution to provide SATIN capabilities for current and future UAS in GPS-denied environments.