A look at several developments in robotic and vehicle navigation for GPS-denied scenarios.
The 75 Years of Innovation series highlights the groundbreaking innovations spanning from SRI's founding in 1946 to today. Each week, SRI will release an innovation, leading up to its 75th anniversary in November 2021.
Navigating all conditions: How SRI fixed GPS-denied navigation for robots and vehicles
Has the Global Positioning System (GPS) in your car ever failed you? Did it lead you down the wrong road, only to end up in the middle of a farmyard? Sometimes, GPS systems just don't work; these situations are described as "GPS-denied." Imagine an autonomous robot or vehicle in a GPS-denied situation; navigating would be difficult, if not impossible. SRI International has been building smart robots since Shakey the Robot made history as the first mobile robot with the ability to perceive and reason about its surroundings. Ensuring that robots and vehicles maintain sight by giving them reliable signals is an ongoing area of research for SRI's Computer Vision Technologies Laboratory.
3D landmarks let robots "see"
In 2013, SRI researchers applied a methodology called a Sliding Window Factor Graph (SWFG) to build a reliable 3D landmark map for robotic applications. An SWFG is a mathematical model that analyzes an array of sensor data across a sliding time window and estimates the position and orientation of a mobile platform or user. The data points of each window are used as a starting point for the next window. By combining an SWFG with a typical robotic navigation approach known as Simultaneous Localization and Mapping (SLAM), SRI researchers significantly improved the navigation of robots and autonomous vehicles. The team devised a three-stage visual feature model that used smoothers (algorithms used to "smooth" data so that important features and trends are easier to see) to optimize the 3D landmark map and minimize the computational power needed for processing tracked features. The approach comprised two components:
- A fast smoother for short-term optimization of platform location and orientation with fixed computation costs (using an SWFG).
- A slow smoother that provided an optimal solution to the full non-linear problem of estimating platform location and orientation (using SLAM).
The two smoothers interacted via a shared map of 3D landmarks. By combining the faster, short-term optimization of the SWFG with the slower optimization of the SLAM, SRI engineers were able to improve GPS-denied navigational accuracy while reducing the computational resources required, resulting in a fast and accurate navigation system for robotic applications. We discuss three different applications of this technology below.
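To make that interaction concrete, below is a minimal, translation-only Python sketch of the dual-smoother idea, assuming simple position-offset observations of landmarks. All class and function names are illustrative; this is not SRI's implementation (see the sliding-window factor graph paper under Resources for the actual formulation).

```python
# Toy illustration of the dual-smoother design: a fast sliding-window smoother
# refines the current pose in bounded time, a slow batch smoother periodically
# re-optimizes the landmark map, and the two interact only through that shared
# 3D landmark map. Translation-only, with made-up names and simplified math.
from collections import deque

import numpy as np


class LandmarkMap:
    """Shared 3D landmark map, keyed by feature-track ID."""
    def __init__(self):
        self.points = {}  # track_id -> (3,) world position


class FastSmoother:
    """Bounded-cost sliding-window refinement, run every frame."""
    def __init__(self, shared_map, window_size=10):
        self.map = shared_map
        self.window = deque(maxlen=window_size)

    def step(self, observations):
        # observations: {track_id: landmark offset measured from the platform}
        self.window.append(observations)
        # Least-squares platform position given known landmarks:
        # position = mean over observations of (landmark_world - offset).
        residuals = [self.map.points[t] - off
                     for obs in self.window for t, off in obs.items()
                     if t in self.map.points]
        return np.mean(residuals, axis=0) if residuals else np.zeros(3)


class SlowSmoother:
    """Full (batch) re-optimization of the landmark map, run in the background."""
    def __init__(self, shared_map):
        self.map = shared_map
        self.keyframes = []

    def add_keyframe(self, pose, observations):
        self.keyframes.append((pose, observations))

    def run(self):
        # Re-estimate every landmark from all keyframes, then publish the
        # result back into the shared map that the fast smoother reads.
        sums, counts = {}, {}
        for pose, obs in self.keyframes:
            for t, off in obs.items():
                sums[t] = sums.get(t, np.zeros(3)) + pose + off
                counts[t] = counts.get(t, 0) + 1
        for t in sums:
            self.map.points[t] = sums[t] / counts[t]


# Usage: one keyframe seeds the shared map, then the fast smoother localizes
# a new frame against those landmarks without re-running the batch step.
shared = LandmarkMap()
slow, fast = SlowSmoother(shared), FastSmoother(shared)
slow.add_keyframe(np.zeros(3), {1: np.array([2.0, 0, 0]), 2: np.array([0, 3.0, 0])})
slow.run()
print(fast.step({1: np.array([1.0, 0, 0]), 2: np.array([-1.0, 3.0, 0])}))  # ~[1, 0, 0]
```

The point of the split is that the per-frame cost stays fixed no matter how long the platform has been running, while the slower smoother keeps the shared landmark map globally consistent.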
Multi-sensor fusion for robot navigation
Robots rely on sensors that can include cameras, GPS, accelerometers, gyroscopes and magnetometers to navigate their environment. The navigation system known as SLAM combines the inputs from these sensors to allow an autonomous robot to map its environment and determine its location and orientation. Equipping a robot with the optimal combination of sensors for a given environment while minimizing weight and energy draw is a complex process that is crucial for robotic efficiency.
In 2014, SRI engineers developed a system that optimized the use of sensors to ensure that a robot could still navigate its environment, even in suboptimal conditions. The system is based on a plug-and-play factor graph framework and employs an incremental smoothing technique that allows it to operate in real time by dynamically optimizing only a small part of the navigational graph. The result was an adaptive solution that dynamically applied the ideal combination of available sensors, and any newly added sensors were simply optimized alongside the existing factors. This produced real-time navigation that made the best use of the sensors on hand for a specific situation. Further tests of the system demonstrated very high (centimeter-level) accuracy.
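As a rough sketch of the plug-and-play idea (hypothetical names, and a simple inverse-variance weighted average standing in for the paper's factor-graph optimization), the example below fuses whatever measurements are available at each step, so a dropped GPS fix or a newly registered sensor changes only the inputs, never the fusion code:

```python
# Illustrative sketch of a plug-and-play sensor fusion loop: each sensor
# registers a factory that emits a measurement "factor" when data is
# available; the fuser combines whatever factors exist at each time step.
import numpy as np


class Factor:
    """A position measurement with a per-axis variance (uncertainty)."""
    def __init__(self, position, variance):
        self.position = np.asarray(position, dtype=float)
        self.variance = float(variance)


class PlugAndPlayFuser:
    def __init__(self):
        self.sensors = {}                    # name -> callable(t) -> Factor or None

    def register(self, name, measurement_fn):
        self.sensors[name] = measurement_fn  # "plug in" a new sensor at any time

    def step(self, t):
        # Gather factors from every sensor that has data right now, then fuse
        # them by inverse-variance weighting (a stand-in for graph optimization).
        factors = {n: f(t) for n, f in self.sensors.items()}
        factors = {n: f for n, f in factors.items() if f is not None}
        if not factors:
            return None, []
        weights = np.array([1.0 / f.variance for f in factors.values()])
        positions = np.stack([f.position for f in factors.values()])
        fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
        return fused, sorted(factors)        # estimate + sensors actually used


# Usage: GPS drops out after t = 5, but the fuser keeps running on the camera.
fuser = PlugAndPlayFuser()
fuser.register("gps", lambda t: Factor([t, 0, 0], 4.0) if t < 5 else None)
fuser.register("camera", lambda t: Factor([t + 0.1, 0, 0], 1.0))
for t in range(7):
    print(t, fuser.step(t))
```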
Aerial navigation when GPS is down
GPS can, in certain circumstances, be unreliable and inaccurate. Issues such as solar storms, mapping inaccuracies, urban area occlusion and malicious attacks can affect location accuracy. Navigation systems in aerial vehicles rely upon accurate GPS coupled with data from an inertial measurement unit (IMU). Researchers at SRI explored ways to cope with aerial navigation during a GPS outage. The result of their effort was the augmentation of aerial navigation using a monocular video camera.
SRI took a novel approach to this problem by developing a navigation system that used only measurements from an IMU and cameras to estimate the precise 3D absolute pose (location and orientation) of the vehicle. The navigation system used the sliding-window factor graph approach to incorporate two kinds of visual measurements, 2D-3D tie-points and geo-registered feature tracks, along with IMU measurements for pose estimation. 2D-3D tie-points are established by finding feature correspondences between an aerial video frame and a 2D geo-referenced image rendered from a 3D terrain database. These measurements provide global information to update the navigation system. Geo-registered feature tracks are generated by associating features across consecutive frames. They enable the propagation of 3D geo-referenced values to further improve the pose estimation. All sensor measurements are fully optimized in a smoother-based inference framework.
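The 2D-3D tie-point measurement can be illustrated with a generic perspective-n-point (PnP) solve: given image features matched to geo-referenced 3D terrain points, the camera's absolute pose can be recovered. The sketch below uses OpenCV's solvePnP as a stand-in for the factor-graph measurement described in the paper; the intrinsics, terrain points and poses are invented for the example, and the 2D detections are synthesized rather than matched from real imagery.

```python
# Recovering absolute camera pose from 2D-3D tie-points with a generic PnP
# solve (requires numpy and opencv-python). Everything numeric here is made up.
import numpy as np
import cv2

# Hypothetical camera intrinsics: 1000 px focal length, 640x480 image.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

# Geo-referenced 3D terrain points (east, north, up in meters) that a matcher
# would have associated with features in the current video frame.
terrain_pts = np.array([[0, 0, 0], [50, 0, 5], [0, 60, 2], [50, 60, 0],
                        [25, 30, 10], [10, 50, 3], [40, 20, 1], [15, 10, 6]],
                       dtype=float)

# Ground-truth pose used only to synthesize the 2D detections for this demo;
# a real system would obtain the 2D points from feature matching instead.
true_rvec = np.array([np.pi, 0.0, 0.0])      # camera looking straight down
true_tvec = np.array([-25.0, 30.0, 500.0])   # roughly 500 m above the terrain
image_pts, _ = cv2.projectPoints(terrain_pts, true_rvec, true_tvec, K, None)

# The tie-point measurement: solve for absolute pose from the correspondences.
ok, rvec, tvec = cv2.solvePnP(terrain_pts, image_pts, K, None)
print("recovered translation:", tvec.ravel())  # close to true_tvec
```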
The system was tested by flying aircraft over entire states, demonstrating real-time optimization and improved accuracy.
Vehicle navigation in large-scale urban areas
In 2016, SRI researchers began looking at ways that autonomous cars could navigate a large-scale urban environment with a location accuracy of less than one meter. They focused on using pre-mapped visual landmarks to aid in navigation. This research is increasingly important as autonomous cars become a reality: while GPS systems are great for journey directions, autonomous vehicles need more precision to avoid obstacles and brake effectively.
Typically, autonomous vehicle navigation systems rely on pre-mapped environmental data. Unfortunately, this only works effectively when there are sufficient mapped landmarks that can be continuously matched to video-captured images during driving. But, of course, urban environments are rarely static.
SRI solved the problem of sub-meter locational accuracy by creating a high-quality map of semantic visual landmarks, constructed using a monocular camera, an IMU and high-precision differential GPS. The system also made use of recent advances in SLAM, neural network-based semantic segmentation and attention-based mechanisms. It was initially tested on the SRI campus on a cloudy winter day, on a route that included a highway and trees. The total driving distance was approximately 3.5 miles and took just under 10 minutes to complete, at an average speed of 21 mph. Using pre-mapped visual landmarks, the navigation system provided sub-meter accuracy for the entire drive. Using semantics and machine learning, the system can evaluate which reference features to use and which to ignore (e.g., features on people or vehicles, which are unlikely to be in the same place the next time) when estimating pose.
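As a simple illustration of that filtering step (not SRI's code; the label set and label map below are invented), features whose pixels fall on dynamic classes in the segmentation output can be dropped before they are matched against the pre-built landmark map:

```python
# Keep only features that land on static structure in the semantic label map,
# so people and vehicles never become (or match) landmarks. Illustrative only.
import numpy as np

STATIC_CLASSES = {"building", "pole", "traffic_sign", "road"}  # assumed label set


def filter_keypoints(keypoints, label_map, class_names):
    """keypoints: (N, 2) array of (x, y) pixels; label_map: (H, W) class indices."""
    kept = []
    for x, y in keypoints.astype(int):
        if class_names[label_map[y, x]] in STATIC_CLASSES:
            kept.append((x, y))          # feature lies on static structure: keep
    return np.array(kept)                # features on dynamic objects are dropped


# Toy usage: a 4x4 label map whose right half is covered by a parked car.
class_names = ["building", "car"]
label_map = np.array([[0, 0, 1, 1]] * 4)
keypoints = np.array([[0, 1], [1, 2], [2, 0], [3, 3]])
print(filter_keypoints(keypoints, label_map, class_names))  # keeps the left-half points
```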
SRI International has a long history of developing innovative techniques and methods using expert teams. SRI research teams continue to produce valuable insights and deliver groundbreaking, useful research to the world.
Resources
Robust Vision-Aided Navigation Using Sliding-Window Factor Graphs: https://ieeexplore.ieee.org/document/6630555
Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs: https://ieeexplore.ieee.org/abstract/document/6906925
Precise Vision-Aided Aerial Navigation: https://www.sri.com/publication/precise-vision-aided-aerial-navigation/
Sub-Meter Vehicle Navigation Using Efficient Pre-Mapped Visual Landmarks: https://ieeexplore.ieee.org/document/7795602
Utilizing semantic visual landmarks for precise vehicle navigation: https://ieeexplore.ieee.org/document/8317859