In this paper we present Hyper-Dimensional Reconfigurable Analytics at the Tactical Edge (HyDRATE): low-SWaP embedded hardware capable of real-time reconfiguration at the edge, combining non-MAC deep neural networks (DNNs) with hyperdimensional (HD) computing accelerators.
We review HyDRATE, a low-SWaP reconfigurable neural network architecture developed under the DARPA AIE HyDDENN (Hyper-Dimensional Data Enabled Neural Network) program.
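As background for the HD computing accelerators mentioned above, the core idea can be sketched in a few lines of NumPy. This is a minimal illustration of generic hyperdimensional classification, not the HyDDENN design: the dimensionality `D` and the choice of XOR binding with majority-vote bundling are illustrative assumptions. Note that none of these operations require multiply-accumulate (MAC) hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

def random_hv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Associate two hypervectors via XOR -- no multiply-accumulate needed."""
    return a ^ b

def bundle(hvs):
    """Superpose hypervectors with a bitwise majority vote."""
    return (np.sum(hvs, axis=0, dtype=np.int32) * 2 > len(hvs)).astype(np.uint8)

def hamming_sim(a, b):
    """Similarity = fraction of matching bits (~0.5 unrelated, 1.0 identical)."""
    return 1.0 - np.count_nonzero(a ^ b) / D

# A class prototype bundled from three "samples" stays measurably close to
# each sample while remaining near-orthogonal to unrelated hypervectors.
samples = [random_hv() for _ in range(3)]
prototype = bundle(samples)
```

Classification then reduces to comparing a query hypervector against stored class prototypes by Hamming similarity, which maps naturally to popcount hardware rather than MAC arrays.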
SRI’s new NV-CMOS™ image sensor technology is designed to capture images over the full range of illumination, from bright sunlight to overcast starlight. SRI’s low-light NV-CMOS image sensors deliver the low-light sensitivity previously achievable only with an analog image intensifier tube, combined with the cost, power, ruggedness, flexibility, and convenience of a digital CMOS imager chip. NV-CMOS provides multi-megapixel resolution at video frame rates with low noise, high sensitivity across the visible and near-infrared (NIR) bands, and extended dynamic range.
This paper describes an architecture framework using heterogeneous hardware accelerators for embedded vision applications. The approach leverages recent single-chip heterogeneous FPGAs that combine powerful multicore processors with extensive programmable gate-array fabric on the same die. We present a framework built on an extensive library of pipelined real-time vision hardware accelerators and a service-based software architecture. This field-proven system design approach gives embedded vision developers a powerful software abstraction layer for rapidly and efficiently integrating any of the hardware accelerators for applications such as image stabilization, moving target indication, contrast normalization enhancement, and others. The framework allows the service-based software to take advantage of the available hardware acceleration blocks and perform the remainder of the processing in software. As performance requirements grow, more hardware acceleration can be added to the FPGA fabric, offloading the main processor.
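The hardware/software fallback described above can be sketched as a small dispatch layer. The class and stage names here are hypothetical illustrations, not the actual framework API: each named stage runs on an FPGA-accelerated implementation when one is registered, and otherwise falls back to pure software.

```python
from typing import Callable, Dict

import numpy as np

Frame = np.ndarray
Stage = Callable[[Frame], Frame]

class VisionPipeline:
    """Routes each named stage to an FPGA-accelerated implementation when
    one is registered, falling back to a pure-software version otherwise."""

    def __init__(self) -> None:
        self._hw: Dict[str, Stage] = {}
        self._sw: Dict[str, Stage] = {}

    def register(self, name: str, fn: Stage, accelerated: bool = False) -> None:
        (self._hw if accelerated else self._sw)[name] = fn

    def run(self, name: str, frame: Frame) -> Frame:
        impl = self._hw.get(name) or self._sw.get(name)
        if impl is None:
            raise KeyError(f"no implementation registered for stage {name!r}")
        return impl(frame)

# Software-only contrast normalization; an accelerated version registered
# later with accelerated=True transparently takes precedence.
pipe = VisionPipeline()
pipe.register("contrast", lambda f: (f - f.min()) / (f.max() - f.min()))
```

The design choice this models is the paper's scaling path: as performance requirements grow, a stage moves into the FPGA fabric simply by registering an accelerated implementation, with no change to the calling software.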
Motion Adaptive Signal Integration-High Dynamic Range (MASI-HDR) Video Processing for Dynamic Platforms
Two of the biggest challenges in designing U×V vision systems are properly representing high-dynamic-range scene content using low-dynamic-range components and reducing camera motion blur. SRI’s MASI-HDR (Motion Adaptive Signal Integration-High Dynamic Range) is a novel technique for generating blur-reduced video using multiple captures for each displayed frame while increasing the […]
We propose a Motion Adaptive Signal Integration (MASI) algorithm that operates the sensor at a high frame rate, with real-time alignment of individual image frames to form an enhanced-quality video output.
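The align-and-integrate idea can be illustrated with a toy NumPy sketch. This is not the MASI algorithm itself: a real system estimates and compensates inter-frame motion, whereas here the per-frame shifts are assumed known and whole-pixel, with `np.roll` standing in for the actual alignment step.

```python
import numpy as np

def masi_integrate(frames, shifts):
    """Undo each short-exposure frame's (dy, dx) shift, then average.

    Integrating many aligned short exposures reduces temporal noise by
    roughly sqrt(N) without the motion blur that a single long exposure
    would accumulate on a moving platform.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))  # align to reference
    return acc / len(frames)
```

Because each input frame is short, motion within a capture stays small; the signal-to-noise benefit of a long exposure is recovered in the integration step instead.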
Overcoming Adverse Weather Conditions with a Common Optical Path, Multiple Sensors, and Intelligent Image Fusion System
This paper introduces an engineering prototype to address the image collection challenges using a Common Optical Path, Multiple Sensors and an Intelligent Image Fusion System, and provides illustrations and sample fusion images.
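A crude per-pixel fusion rule conveys the underlying idea; the gradient-magnitude saliency used here is an illustrative stand-in for the paper's intelligent fusion, and assumes the sensors share a common optical path so the images are already co-registered.

```python
import numpy as np

def fuse(a, b):
    """Per-pixel fusion of two co-registered sensor images: at each pixel,
    keep the sample from whichever sensor shows the stronger local
    gradient (a crude proxy for local scene detail)."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gy, gx)
    return np.where(grad_mag(a) >= grad_mag(b), a, b)
```

A fielded system would typically fuse in a multi-scale (e.g., Laplacian-pyramid) domain rather than selecting raw pixels, but the select-by-saliency principle is the same: wherever one sensor's modality penetrates the weather better, its samples dominate the output.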
This paper describes a family of reconfigurable processing elements (RPEs) used to support video processing for the Sarnoff Vision Front End 200 (VFE-200) vision system.