In this paper we present Hyper-Dimensional Reconfigurable Analytics at the Tactical Edge (HyDRATE), using low-SWaP embedded hardware that performs real-time reconfiguration at the edge by leveraging non-MAC deep neural networks (DNNs) combined with hyperdimensional (HD) computing accelerators.
We review HyDRATE, a low-SWaP reconfigurable neural network architecture developed under the DARPA AIE HyDDENN (Hyper-Dimensional Data Enabled Neural Network) program.
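The core HD idea can be illustrated in a few lines: feature vectors are encoded as high-dimensional binary hypervectors, class prototypes are formed by majority-vote bundling, and classification reduces to XOR-plus-popcount Hamming distances rather than multiply-accumulate operations. The sketch below is a generic illustration of that principle under assumed dimensions and a random-projection encoder; it is not the HyDDENN/HyDRATE hardware design.

```python
# Minimal sketch of hyperdimensional (HD) classification, showing how a
# MAC-heavy classifier layer can be replaced with XOR/popcount operations.
# The dimensionality and random-projection encoder are illustrative
# assumptions, not the HyDDENN/HyDRATE design.
import numpy as np

D = 8192                      # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def encode(features, proj):
    """Encode a real-valued feature vector as a binary hypervector
    via a random projection followed by a sign threshold."""
    return (features @ proj > 0).astype(np.uint8)

def train(X, y, n_classes, proj):
    """Bundle (majority-vote) the hypervectors of each class into a prototype."""
    protos = np.zeros((n_classes, D), dtype=np.uint8)
    for c in range(n_classes):
        hv = np.stack([encode(x, proj) for x in X[y == c]])
        protos[c] = (hv.mean(axis=0) > 0.5).astype(np.uint8)
    return protos

def classify(x, protos, proj):
    """Pick the prototype with the smallest Hamming distance
    (XOR + popcount -- no multiply-accumulate needed)."""
    hv = encode(x, proj)
    return int(np.argmin((protos ^ hv).sum(axis=1)))

# Toy usage: two Gaussian blobs stand in for DNN feature outputs.
proj = rng.standard_normal((16, D))
X = np.vstack([rng.normal(-1, 0.5, (50, 16)), rng.normal(1, 0.5, (50, 16))])
y = np.repeat([0, 1], 50)
protos = train(X, y, 2, proj)
print(classify(X[0], protos, proj), classify(X[60], protos, proj))  # expected: 0 1
```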
The rapid progression of semiconductor technology has significantly impacted our ability to examine and analyze complex integrated circuits (ICs). Small device feature sizes, combined with large die sizes, add a heavy processing burden that severely limits our ability to perform timely defect validation and anti-tampering analysis at full scale. In this paper, we describe the algorithmic steps taken in the processing pipeline to quickly create a global image database of an entire advanced IC. We focus specifically on the image alignment and stitching algorithms necessary to support a combined field of view of a given layer of a die. We describe key algorithmic challenges, such as contextual semantics, that limit the robustness of the alignment algorithm. We also describe the use of database indexing to manage and traverse the enormous volume of data.
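As one illustration of the alignment step, phase correlation is a standard FFT-based way to estimate the translation between overlapping tiles; low-texture or repetitive regions (the contextual-semantics problem noted above) depress the correlation peak and flag unreliable matches. The sketch below shows the generic technique, not the paper's specific pipeline.

```python
# A minimal sketch of pairwise tile alignment via FFT phase correlation,
# a common building block for stitching overlapping tiles into a mosaic.
# This is the generic technique, not the paper's actual pipeline.
import numpy as np

def phase_correlate(tile_a, tile_b):
    """Estimate the (dy, dx) shift to apply to tile_b so it aligns with
    tile_a, by locating the peak of the normalized cross-power spectrum."""
    Fa = np.fft.fft2(tile_a)
    Fb = np.fft.fft2(tile_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > tile_a.shape[0] // 2: dy -= tile_a.shape[0]
    if dx > tile_a.shape[1] // 2: dx -= tile_a.shape[1]
    return dy, dx, corr.max()                # peak height ~ match confidence

# Toy usage: a circularly shifted copy is recovered exactly.
rng = np.random.default_rng(1)
a = rng.random((256, 256))
b = np.roll(a, (7, -12), axis=(0, 1))
print(phase_correlate(a, b))  # (-7, 12, ~1.0)
```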
SRI’s new NV-CMOS™ image sensor technology is designed to capture images over the full range of illumination, from bright sunlight to overcast starlight. SRI’s NV-CMOS image sensors provide the low-light sensitivity previously achievable only with an analog image-intensifier tube, at the cost, power, ruggedness, flexibility, and convenience of a digital CMOS imager chip. NV-CMOS delivers multi-megapixel resolution at video frame rates with low noise, high sensitivity across the visible and near-infrared (NIR) bands, and extended dynamic range.
This paper describes an architecture framework using heterogeneous hardware accelerators for embedded vision applications. The approach leverages recent single-chip heterogeneous FPGAs that combine powerful multicore processors with extensive programmable gate-array fabric on the same die. We present a framework built on an extensive library of pipelined, real-time vision hardware accelerators and a service-based software architecture. This field-proven system design approach gives embedded vision developers a powerful software abstraction layer for rapidly and efficiently integrating any of the hardware accelerators for applications such as image stabilization, moving target indication, contrast normalization enhancement, and others. The framework allows the service-based software to take advantage of whichever hardware acceleration blocks are available and to perform the remainder of the processing in software. As performance requirements increase, more hardware acceleration can be added to the FPGA fabric, further offloading the main processor.
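A minimal sketch of the service-dispatch idea follows: services are registered with hardware and/or software backends, and each call prefers the accelerator when one exists in the fabric. All class and function names here are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch of a service-based abstraction layer that dispatches
# to a hardware accelerator when present and falls back to software.
# Names are hypothetical, not the paper's API.
import numpy as np
from typing import Callable, Dict

class VisionServiceRegistry:
    """Maps a service name (e.g. 'contrast_norm') to hardware and/or
    software implementations that share one call signature."""
    def __init__(self):
        self._hw: Dict[str, Callable] = {}
        self._sw: Dict[str, Callable] = {}

    def register(self, name: str, impl: Callable, hardware: bool = False):
        (self._hw if hardware else self._sw)[name] = impl

    def __call__(self, name: str, *args, **kwargs):
        # Prefer an FPGA accelerator block when one has been registered;
        # otherwise run the remainder of the processing in software.
        impl = self._hw.get(name) or self._sw[name]
        return impl(*args, **kwargs)

# Usage: a software contrast normalization today; registering a hardware
# block later offloads the main processor with no change to callers.
registry = VisionServiceRegistry()
registry.register("contrast_norm",
                  lambda f: (f - f.mean()) / (f.std() + 1e-6))
out = registry("contrast_norm", np.random.rand(480, 640))
```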
Stereo vision processing is a critical component of augmented reality systems, which rely on a precise depth map of the scene to properly place computer-generated objects within real-life video. Important aspects of the stereo processing are the creation of a dense depth map, high boundary precision, low latency, and low power. We present an embedded system for stereo vision processing based on a custom GigE Vision board with an Altera Stratix IV FPGA and the Acadia® II System-on-Chip that replaces an existing GPU/PC-based system. By porting the stereo algorithm to an FPGA, we reduced size and power requirements, lightened the CPU workload, and eliminated the need for a high-end graphics card. The embedded system runs the same algorithm as the GPU/PC-based system, but at 10× lower power and with lower latency. Housed in a small enclosure, the overall system enables greater user mobility for a more compelling user experience.
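For context on why depth-map precision matters for object placement, a rectified stereo pair relates disparity d to metric depth via Z = fB/d, so small disparity errors translate into large depth errors at range. A minimal sketch, with illustrative focal-length and baseline values:

```python
# Disparity-to-depth conversion for a rectified stereo pair: Z = f * B / d.
# The focal length and baseline below are illustrative assumptions.
import numpy as np

def disparity_to_depth(disparity_px, focal_px=800.0, baseline_m=0.12):
    """Convert a disparity map (pixels) to metric depth (meters).
    Zero disparity (no match / infinitely far) maps to inf."""
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px, np.inf)

print(disparity_to_depth(np.array([64.0, 16.0, 0.0])))  # [1.5  6.0  inf]
```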
High-performance dense stereo is a critical component of computer vision applications such as 3D reconstruction, robot navigation, and augmented reality. In this paper, we present a low-power, high-performance FPGA implementation of a stereo algorithm suitable for embedded real-time platforms. The design scales to higher image resolutions and frame rates and supports different cameras and application requirements. We achieve this by designing highly parallel computation cores with very efficient memory access to the image data. Using a prototype board, we demonstrate real-time stereo processing with 640×480-pixel GigE Vision cameras at 30 frames per second. We show that this FPGA design consumes 10× less power, is more scalable, and has lower latency than a GPU-based implementation of the same stereo algorithm.
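A common core for such designs is winner-take-all sum-of-absolute-differences (SAD) block matching, in which each candidate disparity is scored by a windowed cost and the cheapest wins; the per-disparity costs are what an FPGA evaluates in parallel. The sketch below shows that generic technique with illustrative parameters, not the paper's specific algorithm or memory organization.

```python
# Winner-take-all SAD block matching on a rectified grayscale pair.
# Window size and disparity range are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def sad_stereo(left, right, max_disp=64, win=9):
    """Return a per-pixel disparity map via winner-take-all SAD matching."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, : w - d]).astype(np.float32)
        # Windowed SAD == box filter over the absolute differences.
        cost[d, :, d:] = uniform_filter(diff, size=win)
    return np.argmin(cost, axis=0).astype(np.uint8)

# Toy usage on a random pair shifted by a constant disparity.
rng = np.random.default_rng(2)
L = rng.random((64, 128)).astype(np.float32)
R = np.roll(L, -8, axis=1)                     # true disparity ~ 8 px
print(np.median(sad_stereo(L, R, max_disp=16)))  # ~8
```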
In scenes with significantly varying lighting conditions, under- and over-exposed regions can suffer a loss of information. Similarly, spatial depth within a scene can cause some image regions to be out of focus. Several methods address these issues: tone mapping for true high-dynamic-range representation and exposure fusion for combining varied-exposure low-dynamic-range images solve the former, while image fusion and segmentation address the latter. This paper proposes an overhauled method of exposure fusion that solves the exposure and focus problems simultaneously, achieving a well-exposed, all-in-focus result. Smart, scene-based data acquisition techniques for reducing both the required input data and the computational resources are discussed. A platform for a real-time system implementation is also presented.
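One widely used formulation of exposure fusion, due to Mertens et al., scores each input per pixel for qualities such as contrast and well-exposedness and blends the stack with the normalized scores; a sharpness term can be added the same way to fold in the focus problem. The sketch below shows that generic weighting with a naive per-pixel blend, omitting the pyramid blending a production system would use.

```python
# Exposure-fusion weighting in the style of Mertens et al.: per-pixel
# contrast and well-exposedness scores blend an LDR stack into one image.
# This naive per-pixel blend is a sketch, not the paper's overhauled method.
import numpy as np
from scipy.ndimage import laplace

def fuse_exposures(stack, sigma=0.2):
    """stack: (N, H, W) grayscale images in [0, 1]. Returns fused (H, W)."""
    contrast = np.abs(np.stack([laplace(img) for img in stack]))
    well_exposed = np.exp(-((stack - 0.5) ** 2) / (2 * sigma**2))
    weights = contrast * well_exposed + 1e-12
    weights /= weights.sum(axis=0, keepdims=True)   # normalize across stack
    return (weights * stack).sum(axis=0)

# Toy usage: a dark and a bright rendition of the same random scene.
rng = np.random.default_rng(3)
scene = rng.random((120, 160))
fused = fuse_exposures(np.stack([scene * 0.3, 0.7 + scene * 0.3]))
```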
Image fusion is an important visualization technique for integrating coherent spatial and temporal information into a compact form. Laplacian fusion combines regions of images from different sources into a single fused image based on a salience selection rule for each region. In this paper, we propose an algorithmic approach that uses a mask pyramid to better localize the selection process. A mask pyramid operates at different scales of the image to improve the fused image quality beyond what a global selection rule can achieve. Several examples of this mask-pyramid method demonstrate its performance in a variety of applications. A new embedded system architecture that builds upon the Acadia® II Vision Processor is proposed.
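A minimal sketch of Laplacian fusion with per-level selection follows: each source is decomposed into band-pass levels, a mask at every scale keeps the coefficient with the larger (more salient) magnitude, and the fused pyramid is collapsed back into an image. The pyramid depth and the max-magnitude salience rule are illustrative choices, not the paper's exact mask-pyramid construction.

```python
# Laplacian-pyramid fusion with a per-scale selection mask. Depth and the
# max-|L| salience rule are illustrative, not the paper's construction.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian(img, levels=4):
    """Decompose an image into band-pass levels plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        low = gaussian_filter(cur, 1.0)
        down = low[::2, ::2]
        up = zoom(down, 2, order=1)[: cur.shape[0], : cur.shape[1]]
        pyr.append(cur - up)               # detail at this scale
        cur = down
    pyr.append(cur)                         # low-pass residual
    return pyr

def laplacian_fuse(img_a, img_b, levels=4):
    """Fuse two images with per-scale salience masks over Laplacian levels."""
    pa, pb = build_laplacian(img_a, levels), build_laplacian(img_b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # per-scale mask
             for la, lb in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])           # average the low-pass residuals
    for lap in reversed(fused):             # collapse coarse-to-fine
        out = zoom(out, 2, order=1)[: lap.shape[0], : lap.shape[1]] + lap
    return out
```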