In this paper we present Hyper-Dimensional Reconfigurable Analytics at the Tactical Edge (HyDRATE), which uses low-SWaP embedded hardware to perform real-time reconfiguration at the edge by combining non-MAC deep neural networks (DNNs) with hyperdimensional (HD) computing accelerators.
Computational sensing / low-power processing publications
We review HyDRATE, a low-SWaP reconfigurable neural network architecture developed under the DARPA AIE HyDDENN (Hyper-Dimensional Data Enabled Neural Network) program.
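As a rough illustration of the HD computing side (not the HyDDENN design itself), the sketch below classifies by comparing bundled bipolar hypervectors with a simple similarity measure, replacing MAC-heavy layers with element-wise bind and bundle operations; the dimensionality, encoding, and feature/level hypervectors are all assumptions made for the example.

```python
import numpy as np

D = 10_000  # hypervector dimensionality, typical for HD computing
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: element-wise multiply (equivalent to XOR in the binary domain)."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise majority vote over a set of hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized agreement between two hypervectors."""
    return float(np.dot(a, b)) / D

# Encode a toy sample: bind each feature ID with its level, then bundle.
feature_ids = [random_hv() for _ in range(3)]
levels = [random_hv() for _ in range(3)]
sample_hv = bundle([bind(f, l) for f, l in zip(feature_ids, levels)])

# Classification reduces to nearest-prototype search by similarity,
# with no multiply-accumulate layers in the decision stage.
prototypes = {"class_a": random_hv(), "class_b": sample_hv}
print(max(prototypes, key=lambda c: similarity(prototypes[c], sample_hv)))
```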
Quantization for deep neural networks has afforded models for edge devices that use less on-board memory and enable efficient low-power inference. In this paper, we present a comparison of model-parameter-driven quantization approaches that can achieve as low as 3-bit precision without affecting accuracy. The post-training quantization approaches are data-free, and the resulting weight values remain closely tied to the dataset distribution on which the model converged to optimality. We show quantization results for a number of state-of-the-art deep neural networks (DNNs) using large datasets such as ImageNet. To better analyze quantization results, we describe the overall range and local sparsity of values afforded by the various quantization schemes. We also show methods to lower bit precision beyond quantization limits using object-class clustering.
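For intuition about what a data-free, low-bit post-training scheme looks like, the sketch below applies symmetric uniform quantization to a weight tensor at 3-bit precision; it is a generic baseline written for illustration, not the model-parameter-driven approach compared in the paper.

```python
import numpy as np

def quantize_weights(w, bits=3):
    """Symmetric uniform post-training quantization (data-free: uses only the weights).

    Maps float weights onto a signed integer grid of 2**bits levels and returns
    the de-quantized values, the integer codes, and the per-tensor scale.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 3 for signed 3-bit
    scale = np.max(np.abs(w)) / qmax      # scale derived from the weight range alone
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q * scale, q, scale

# Example: quantize a random layer and report the error introduced.
w = np.random.randn(256, 128).astype(np.float32) * 0.05
w_hat, codes, scale = quantize_weights(w, bits=3)
print("distinct levels used:", np.unique(codes).size)
print("mean absolute error:", np.mean(np.abs(w - w_hat)))
```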
The rapid progression of semiconductor technology has significantly impacted the ability to examine and analyze complex integrated circuits (ICs). Small device feature sizes, combined with large die sizes, add a heavy processing burden that severely limits the ability to perform timely defect validation and anti-tampering analysis at full scale. In this paper, we describe the algorithmic steps taken in the processing pipeline to quickly create a global image database of an entire advanced IC. We focus specifically on the image alignment and stitching algorithms needed to build a combined field of view of a given layer of a die. We describe key algorithmic challenges, such as contextual semantics that limit the robustness of the alignment algorithm. We also describe the use of database indexing to manage and traverse the enormous amounts of data.
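For intuition, a minimal pairwise tile-alignment sketch using ORB features and a RANSAC homography (OpenCV) follows; the pipeline's actual algorithms are not specified here, and the early-exit path hints at how feature-poor, repetitive IC regions (the contextual-semantics problem) can defeat matching.

```python
import cv2
import numpy as np

def align_tiles(ref_tile, new_tile, min_matches=10):
    """Estimate the homography mapping new_tile into ref_tile's frame, or None."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(ref_tile, None)
    k2, d2 = orb.detectAndCompute(new_tile, None)
    if d1 is None or d2 is None:
        return None  # not enough texture to describe either tile

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # repetitive or featureless region: alignment is unreliable

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Usage: warp the new tile into the reference frame before blending into the mosaic.
# H = align_tiles(ref, tile)
# stitched = cv2.warpPerspective(tile, H, (mosaic_width, mosaic_height))
```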
We present a novel optimization strategy for training neural networks which we call “BitNet”. Our key idea is to limit the expressive power of the network by dynamically controlling the range and set of values that the parameters can take.
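A minimal sketch of the general idea appears below: the forward pass uses weights projected onto a small fixed grid while gradient updates are applied to a latent full-precision copy. The grid range is hard-coded here, whereas BitNet controls the range and value set dynamically during training, so the details should be read as assumptions rather than the paper's method.

```python
import numpy as np

def project(w, bits, w_min, w_max):
    """Project weights onto a fixed grid of 2**bits values spanning [w_min, w_max]."""
    levels = 2 ** bits - 1
    scale = (w_max - w_min) / levels
    return np.clip(np.round((w - w_min) / scale), 0, levels) * scale + w_min

# Toy linear model trained with the value-set constraint in the loop:
# the loss is evaluated with the constrained weights, while the update
# touches the latent full-precision copy (a straight-through-style step).
rng = np.random.default_rng(0)
X = rng.standard_normal((512, 16))
y = X @ rng.standard_normal(16)

w_latent, bits, lr = np.zeros(16), 3, 0.05
for _ in range(200):
    w_q = project(w_latent, bits, w_min=-2.0, w_max=2.0)  # restricted value set
    grad = X.T @ (X @ w_q - y) / len(X)
    w_latent -= lr * grad
print("distinct weight values:", np.unique(project(w_latent, bits, -2.0, 2.0)).size)
```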
Distributed smart cameras exploit smartphone-class processor performance for node communication and video metadata exchange, allowing the network to reason collectively when interpreting the scene, generating alerts, and making decisions.
This paper describes an architecture framework using heterogeneous hardware accelerators for embedded vision applications. The approach leverages recent single-chip heterogeneous FPGAs that combine powerful multicore processors with extensive programmable gate-array fabric on the same die. We present a framework built on an extensive library of pipelined, real-time vision hardware accelerators and a service-based software architecture. This field-proven system design approach provides embedded vision developers with a powerful software abstraction layer for rapidly and efficiently integrating any of the hardware accelerators for applications such as image stabilization, moving target indication, contrast normalization enhancement, and others. The framework allows the service-based software to take advantage of the hardware acceleration blocks that are available and perform the remainder of the processing in software. As performance requirements increase, more hardware acceleration can be added to the FPGA fabric, thus offloading the main processor.
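A hedged sketch of the service-based dispatch pattern follows: a processing stage runs on a registered hardware accelerator when one exists in the fabric and otherwise falls back to a software implementation on the host processor. All class, stage, and function names here are hypothetical, not the framework's actual API.

```python
from typing import Callable, Dict

import numpy as np

class VisionService:
    """Dispatches a named stage to a hardware accelerator if registered,
    otherwise runs the software fallback on the host processor."""

    def __init__(self):
        self._hw: Dict[str, Callable] = {}   # stages offloaded to FPGA fabric
        self._sw: Dict[str, Callable] = {}   # software implementations

    def register_sw(self, name: str, fn: Callable) -> None:
        self._sw[name] = fn

    def register_hw(self, name: str, fn: Callable) -> None:
        self._hw[name] = fn                  # e.g. a driver call into the fabric

    def run(self, name: str, frame):
        impl = self._hw.get(name, self._sw.get(name))
        if impl is None:
            raise KeyError(f"no implementation for stage '{name}'")
        return impl(frame)

# Software fallback for contrast normalization; a hardware-accelerated version
# would later be registered under the same name as more fabric becomes available.
svc = VisionService()
svc.register_sw("contrast_norm", lambda f: (f - f.mean()) / (f.std() + 1e-6))
frame = np.random.rand(480, 640).astype(np.float32)
out = svc.run("contrast_norm", frame)
```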
Motion Adaptive Signal Integration-High Dynamic Range (MASI-HDR) Video Processing for Dynamic Platforms
Two of the biggest challenges in designing U×V vision systems are properly representing high dynamic range scene content using low dynamic range components and reducing camera motion blur. SRI’s MASI-HDR (Motion Adaptive Signal Integration-High Dynamic Range) is a novel technique for generating blur-reduced video using multiple captures for each displayed frame while increasing the […]
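As a rough, generic illustration of motion-adaptive integration (not SRI's MASI-HDR algorithm), the sketch below averages multiple short captures where the scene is static, which raises signal and effective dynamic range, and falls back to the latest capture where motion is detected, which limits blur.

```python
import numpy as np

def motion_adaptive_integrate(captures, motion_thresh=0.05):
    """Fuse several short-exposure captures into one displayed frame.

    Static pixels average across all captures (better SNR / dynamic range);
    pixels that change between captures keep the latest value (less motion blur).
    """
    stack = np.stack(captures).astype(np.float32)     # shape (N, H, W)
    latest = stack[-1]
    motion = np.abs(stack - latest).max(axis=0)       # per-pixel change across captures
    integrated = stack.mean(axis=0)
    return np.where(motion > motion_thresh, latest, integrated)

# Usage with N sequential short captures contributing to one displayed frame:
# frame = motion_adaptive_integrate([cap0, cap1, cap2, cap3])
```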
Full motion video (FMV) delivered over unreliable, low-bit-rate network channels suffers from quality issues such as jitter and block artifacts. In this paper, we introduce Vision Guided Compression (VGC), a pre-processing technology that can be coupled with standards-based video coding to provide FMV at low bit rates. VGC uses computer vision algorithms to track salient features and keep them sharp, while non-salient features are low-pass filtered. With this approach, VGC provides an additional spatial parameter for gracefully tuning the QoS while providing FMV and preserving salient visual information.
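A minimal sketch of the pre-processing idea follows, using tracked corner features as a stand-in for VGC's saliency tracking: neighborhoods around salient features stay sharp while the rest of the frame is low-pass filtered before standard encoding. The function and parameter choices are illustrative assumptions, not the VGC implementation.

```python
import cv2
import numpy as np

def vgc_style_preprocess(frame_gray, blur_ksize=15, mask_radius=25):
    """Keep tracked salient features sharp; low-pass filter everything else.

    A standards-based encoder applied afterwards spends fewer bits on the
    smoothed background, preserving salient content at low bit rates.
    """
    # Saliency proxy: strong corner features stand in for the tracked features.
    corners = cv2.goodFeaturesToTrack(frame_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    mask = np.zeros_like(frame_gray, dtype=np.uint8)
    if corners is not None:
        for x, y in corners.reshape(-1, 2):
            cv2.circle(mask, (int(x), int(y)), mask_radius, 255, thickness=-1)

    blurred = cv2.GaussianBlur(frame_gray, (blur_ksize, blur_ksize), 0)
    return np.where(mask > 0, frame_gray, blurred)

# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# preprocessed = vgc_style_preprocess(frame)   # then feed to the video encoder
```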