Technical Director, Center for Vision Technologies, Vision and Robotics Laboratory
Han-Pang Chiu, Ph.D., Technical Director of the Scene Understanding and Navigation (SUN) Group at SRI International, leads government and commercial projects that develop innovative solutions for real-world applications in multi-sensor navigation, multi-modal target detection and classification, robotic autonomy, and mobile augmented reality.
Han-Pang has served as chief scientist and technical lead on many DARPA, ONR, and U.S. Army research programs, and has been a pioneer in factor-graph formulations for multi-sensor navigation. The paper “Stable Vision-Aided Navigation for Large-Area Augmented Reality,” which he co-authored and presented, received the Best Paper Award at the IEEE Virtual Reality 2011 conference. His work also underpins several SRI spin-off companies. More recently, he has been leading SRI’s efforts in semantic navigation, which develops new deep learning techniques to derive and exploit high-level semantic scene information for robotic autonomy.
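Factor-graph navigation of this kind is commonly built on open-source libraries such as GTSAM. The sketch below is a minimal, hypothetical illustration of the general idea, fusing a position prior with odometry-style relative-pose measurements; it is not SRI's or Han-Pang's actual implementation, and all poses and noise values are invented for the example.

```python
# Minimal factor-graph fusion sketch using the open-source GTSAM library.
# Illustrative only: poses, measurements, and noise values are made up.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Prior on the first pose (e.g., a last known fix before GPS is denied).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Relative-pose factors between consecutive poses (e.g., visual odometry).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Deliberately perturbed initial estimates for the optimizer.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.1, -0.1, 0.02))
initial.insert(1, gtsam.Pose2(1.2, 0.1, -0.03))
initial.insert(2, gtsam.Pose2(2.1, 0.1, 0.01))

# Nonlinear optimization over the whole graph recovers the trajectory.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```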
Prior to joining SRI, Han-Pang was a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), where he worked on a DARPA-funded project developing computer vision techniques for robot grasping. He received his Ph.D. in Computer Science from MIT in 2009 and holds an MBA in management information systems from National Taiwan University.
Recent publications
C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation
We propose C-SFDA, a curriculum learning aided self-training framework for SFDA that adapts efficiently and reliably across domains based on selective pseudo-labeling. Specifically, we employ a curriculum learning scheme to promote learning from a restricted set of pseudo-labels selected on the basis of their reliability.
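As a rough illustration of selective pseudo-labeling, the sketch below keeps only the most confident fraction of pseudo-labels and grows that fraction over training rounds. It is a simplified, hypothetical rendering of the general idea, not the C-SFDA paper's actual reliability criteria or training loop.

```python
# Illustrative sketch of reliability-based pseudo-label selection with a
# growing curriculum fraction. The selection rule is simplified and does
# not reproduce the paper's actual criteria.
import torch

def select_pseudo_labels(logits: torch.Tensor, keep_fraction: float):
    """Keep the most confident fraction of unlabeled target samples."""
    probs = torch.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    k = max(1, int(keep_fraction * len(confidence)))
    selected = confidence.topk(k).indices  # most reliable samples first
    return selected, pseudo_labels[selected]

# Curriculum: admit more pseudo-labels as adaptation progresses.
logits = torch.randn(1000, 10)  # placeholder model outputs on target data
for round_idx, frac in enumerate([0.1, 0.3, 0.5]):
    idx, labels = select_pseudo_labels(logits, frac)
    # ...self-train the model on the selected samples and labels here...
    print(f"round {round_idx}: training on {len(idx)} pseudo-labeled samples")
```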
Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager
In this presentation, we describe and demonstrate a novel vision-enhanced low-light imager system to provide GPS-denied navigation and ML-based visual scene understanding capabilities for both day and night operations.
Vision based Navigation using Cross-View Geo-registration for Outdoor Augmented Reality and Navigation Applications
In this work, we present a new vision-based cross-view geo-localization solution matching camera images to a 2D satellite/overhead reference image database. We present solutions for both coarse search for cold start and fine alignment for continuous refinement.
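A minimal sketch of what the coarse-search stage of such a system might look like, assuming ground and overhead images have already been mapped into a shared embedding space: retrieval then reduces to nearest-neighbor search over precomputed tile embeddings. The dimensions and data below are placeholders, not the published system.

```python
# Sketch of a coarse-search stage as cosine-similarity retrieval of a
# ground-image embedding against precomputed overhead-tile embeddings.
# Embedding networks are omitted; random vectors stand in for them.
import numpy as np

rng = np.random.default_rng(0)
tile_embeddings = rng.normal(size=(5000, 256))   # one row per overhead tile
tile_embeddings /= np.linalg.norm(tile_embeddings, axis=1, keepdims=True)

query = rng.normal(size=256)                     # ground-camera embedding
query /= np.linalg.norm(query)

scores = tile_embeddings @ query                 # cosine similarities
top5 = np.argsort(scores)[::-1][:5]              # coarse candidate tiles
print("candidate tiles:", top5)
# A fine-alignment stage would then refine the camera pose against the
# best-matching tile(s).
```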
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching.
Autonomous Docking Using Learning-Based Scene Segmentation in Underground Mine Environments
This paper describes a vision-based autonomous docking solution that moves a coal-mine shuttle car to the continuous miner in GPS-denied underground environments.
Ranging-Aided Ground Robot Navigation Using UWB Nodes at Unknown Locations
This paper describes a new ranging-aided navigation approach that does not require the locations of ranging radios.
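To illustrate the general idea of ranging-aided navigation without known radio locations, the sketch below recovers an unknown ranging-radio (anchor) position from noisy range measurements taken along a known robot path, using nonlinear least squares. The geometry and noise model are invented for the example and do not reflect the paper's actual formulation.

```python
# Sketch: estimate an unknown ranging-radio (anchor) position from noisy
# UWB range measurements along a known robot path. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
true_anchor = np.array([4.0, 3.0])

# Robot path from onboard odometry: a loop, which gives good geometry.
t = np.linspace(0.0, 2.0 * np.pi, 12)
robot_path = np.column_stack([5.0 * np.cos(t), 5.0 * np.sin(t)])

# Simulated noisy UWB ranges from each robot position to the anchor.
ranges = np.linalg.norm(robot_path - true_anchor, axis=1)
ranges += rng.normal(scale=0.05, size=len(ranges))

def residuals(anchor):
    """Difference between predicted and measured ranges."""
    return np.linalg.norm(robot_path - anchor, axis=1) - ranges

sol = least_squares(residuals, x0=np.zeros(2))
print("estimated anchor position:", sol.x)  # ~ [4.0, 3.0]
```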