Machine learning publications March 27, 2023

Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager

Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

Citation

Han-Pang Chiu, Supun Samarasekera, Niluthpol Mithun, Abhinav Rajvanshi, Kevin Kaighn, Glenn Murray, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager, 2023 Joint Navigation Conference, Institute of Navigation

Abstract

Accurate navigation and situational understanding in GPS-denied environments are key capabilities for military platforms and warfighters. Vision-aided navigation systems that combine machine learning (ML) techniques with small, low-cost electro-optical (EO) cameras provide these capabilities. In addition to estimating platform location, these systems can analyze the perceived scene and identify targets of interest in real time during navigation by using a pre-trained deep neural network. However, all of these capabilities degrade dramatically in night-time operations and in dark environments such as tunnels and mines, which are common settings for military missions. Vision-based navigation methods are unreliable in these perceptually challenging cases, and the performance of image-based ML techniques is likewise poor in visually degraded conditions.

In this presentation, we describe and demonstrate a novel vision-enhanced low-light imager system that provides GPS-denied navigation and ML-based visual scene understanding for both day and night operations. Our system uses SRI’s DomiNite imager, an advanced low-SWAP (size, weight, and power) low-light sensor that provides both day and night imaging without the need for expensive and bulky image intensifiers or infrared imagers. The DomiNite imager, based on the fourth generation of SRI’s NV-CMOS® technology, is the first digital night-vision imager that sees into the shadows during the day and through the darkness of the night. Our system adapts and extends SRI’s state-of-the-art vision-aided navigation methods and machine learning techniques to work with the DomiNite imager. It enables enhanced augmented reality (AR) features for aided target recognition, as well as situational awareness for mobile platforms and warfighters in all conditions (day and night), without the use of any external illumination.

Unlike conventional global-shutter EO cameras, the low-light imager uses a rolling shutter: each image row is captured at a different time. To adapt our vision-based navigation methods to low-light cameras, we perform real-time motion compensation across image rows using high-rate motion measurements from a small, low-cost IMU (inertial measurement unit). The 6-DoF (degree-of-freedom) navigation pose (3D position and 3D orientation) is then estimated by tracking and fusing visual features across video frames together with the IMU-based motion model. Our system uses an error-state Kalman filter to integrate the sensor measurements and produce a 6-DoF platform pose at 30 Hz. Using our system, high-precision GPS-denied navigation accuracy (<1 m error after >1 km of travel) is achieved in night-time dark environments.
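The per-row motion-compensation step can be sketched as follows, assuming a small-angle rotation model and a uniform top-to-bottom readout. The helper names, timing model, and intrinsics here are illustrative assumptions, not SRI's implementation:

```python
import numpy as np

def row_timestamp(frame_start, row, readout_time, num_rows):
    """Capture time of an image row under a rolling shutter, assuming
    rows are read out sequentially over `readout_time` seconds."""
    return frame_start + readout_time * (row / num_rows)

def compensate_feature(pt, frame_start, readout_time, num_rows, omega, K):
    """Warp a feature (u, v) to the camera pose at `frame_start` by
    undoing the rotation accumulated while the shutter rolled down to
    that feature's row.  `omega` is the IMU angular rate (rad/s) in
    camera coordinates and `K` the camera intrinsics.  Small-angle
    rotation-only model; a hypothetical sketch, not SRI's method."""
    u, v = pt
    dt = row_timestamp(frame_start, v, readout_time, num_rows) - frame_start
    theta = omega * dt                       # small rotation vector
    # First-order rotation: R ~ I + [theta]_x for small angles
    R = np.eye(3) + np.array([[0.0, -theta[2], theta[1]],
                              [theta[2], 0.0, -theta[0]],
                              [-theta[1], theta[0], 0.0]])
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project pixel
    warped = K @ (R.T @ ray)                 # undo rotation, re-project
    return warped[:2] / warped[2]
```

With all rows compensated to a common timestamp, the frame can be treated as if it were global-shutter by the downstream feature tracker and filter.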

For vision-based navigation using semantic cues, and for situational understanding in darkness, we developed a novel unsupervised transfer-learning framework that adapts existing deep neural networks from EO cameras to low-light images. A traditional supervised learning approach would require extensive data collection and manual labeling to train a deep neural network from scratch for low-light sensors, which is expensive and time consuming. Our unsupervised framework avoids this human labeling effort by using an existing EO-camera-based deep network (teacher) to supervise the training of a new deep neural network (student) for the low-light sensor within a teacher-student self-training architecture. The trained network from our framework outperforms state-of-the-art ML methods in semantic segmentation by +10.5% accuracy.

Dynamic objects such as people and vehicles detected by our semantic segmentation network can be cued for situational awareness. The accuracy of our estimated platform pose is also improved by filtering out pixels associated with these detected dynamic targets from the pose-estimation process. The segmented static regions, such as roads and buildings, can also be used for semantic geo-registration, which generates absolute pose measurements by matching the segmented regions of a perceived camera frame against a geo-referenced imagery database.

Our entire system is integrated into a low-SWAP hand-held hardware unit (<8.1 × 7.0 × 3.2 in., <1.7 kg / 3.75 lbs., <15 W) with the DomiNite imager that can be used by warfighters or small military platforms. In this presentation, we describe the details of the hardware and our methods for each system module. We show experimental results achieved by our system operating at night under starlight conditions with no external illumination. The results include 6-DoF GPS-denied navigation pose estimation and real-time semantic scene analysis, with AR applications.
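The teacher-student self-training idea can be sketched in miniature: a frozen teacher's predictions on unlabeled low-light features become pseudo-labels, and the student takes cross-entropy gradient steps toward them. A toy linear per-pixel classifier on synthetic features stands in for the real segmentation networks; all names and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class LinearSegmenter:
    """Toy per-pixel classifier standing in for a deep segmentation net."""
    def __init__(self, n_feat, n_class):
        self.W = rng.normal(scale=0.1, size=(n_feat, n_class))
    def predict(self, X):
        return softmax(X @ self.W)

def self_train_step(student, teacher, X_lowlight, lr=0.5):
    """One self-training step: the frozen teacher's argmax predictions
    on unlabeled low-light features serve as pseudo-labels, and the
    student takes a cross-entropy gradient step toward them."""
    pseudo = teacher.predict(X_lowlight).argmax(axis=1)       # pseudo-labels
    probs = student.predict(X_lowlight)
    onehot = np.eye(probs.shape[1])[pseudo]
    grad = X_lowlight.T @ (probs - onehot) / len(X_lowlight)  # CE gradient
    student.W -= lr * grad
    # Return the student's cross-entropy w.r.t. the pseudo-labels
    return float(-np.log(probs[np.arange(len(pseudo)), pseudo] + 1e-9).mean())

# Synthetic "features" for unlabeled low-light pixels
X = rng.normal(size=(256, 8))
teacher = LinearSegmenter(8, 3)
student = LinearSegmenter(8, 3)
losses = [self_train_step(student, teacher, X) for _ in range(50)]
```

Because the teacher is never updated and the pseudo-labels require no human annotation, the loop transfers the EO-trained model's knowledge to the new sensor domain without a labeled low-light dataset.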
