Hyper-Dimensional Analytics of Video Action at the Tactical Edge

Citation

Michael Isnardi, Saurabh Farkya, Indu Kandaswamy, Aswin Raghavan, David Zhang, Gooitzen van der Wal, Joe Zhang, Zachary Daniels, Michael Piacentino, “Hyper-Dimensional Analytics of Video Action at the Tactical Edge”. Presented at the GOMACTech 2021 virtual conference, March 29 – April 1, 2021.

Abstract

We review HyDRATE, a low-SWaP reconfigurable neural network architecture developed under the DARPA AIE HyDDENN (Hyper-Dimensional Data Enabled Neural Network) program. We describe the training and simulated performance of a feature extractor free of multiply-accumulate (MAC) operations feeding a hyperdimensional (HD) logic-based classifier, and show how performance increases with the number of hyperdimensions. Reconfigurability in the field is achieved by retraining only the feed-forward HD classifier, without gradient descent backpropagation. We show performance on a video activity classification task and demonstrate retraining on the same dataset. Finally, we discuss a realized FPGA architecture that achieves a 10x smaller memory footprint, 10x simpler operations, and 100x lower latency/power compared to traditional deep neural networks.

Keywords: DARPA; HyDDENN; FPGA; hyperdimensional computing; video activity classification; training at the edge; reconfigurability
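
For readers unfamiliar with HD classifiers, the sketch below illustrates the general pattern the abstract refers to: feature vectors are encoded into long bipolar hypervectors by binding and bundling against fixed random codebooks, class prototypes are built by simple accumulation, and classification reduces to a similarity search. Retraining in the field then amounts to re-accumulating prototypes from new labeled samples, with no gradient descent. The vector length D, codebook sizes, encoding scheme, and all names below are illustrative assumptions, not the HyDRATE design, which realizes logic-based HD operations on an FPGA.

```python
import numpy as np

# Illustrative HD-classifier sketch (assumed parameters, not HyDRATE's).
D = 10_000            # hypervector length; the abstract notes accuracy grows with this
NUM_FEATURES = 64     # assumed length of the feature-extractor output
NUM_LEVELS = 16       # assumed number of quantization levels for feature values
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

# Fixed random codebooks: one ID hypervector per feature, one per quantization level.
feature_hvs = np.stack([random_hv() for _ in range(NUM_FEATURES)])
level_hvs = np.stack([random_hv() for _ in range(NUM_LEVELS)])

def encode(features):
    """Bind each feature ID with its quantized level, then bundle by majority vote."""
    levels = np.clip((np.asarray(features) * NUM_LEVELS).astype(int), 0, NUM_LEVELS - 1)
    bound = feature_hvs * level_hvs[levels]     # elementwise binding (XOR analogue)
    return np.where(bound.sum(axis=0) >= 0, 1, -1)  # bundling (superposition + sign)

def train(samples, labels, num_classes):
    """Build class prototypes by accumulating encoded samples -- no backpropagation."""
    prototypes = np.zeros((num_classes, D), dtype=int)
    for x, y in zip(samples, labels):
        prototypes[y] += encode(x)
    return np.where(prototypes >= 0, 1, -1)

def classify(prototypes, features):
    """Predict the class whose prototype is most similar (highest dot product)."""
    return int(np.argmax(prototypes @ encode(features)))
```

In this sketch, retraining the classifier on new data is a single feed-forward pass over the new labeled feature vectors through train(), which is what makes edge reconfigurability cheap relative to backpropagating through a deep network.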
