We introduce Deep Adaptive Semantic Logic (DASL), a novel framework for automating the generation of deep neural networks that incorporates user-provided formal knowledge to improve learning from data.
In this paper, we present a comparison of model-parameter-driven quantization approaches that can achieve as low as 3-bit precision without affecting accuracy.
We present an approach that combines appearance and semantic information for 2D image-based localization (2D-VL) across large perceptual changes and time lags.
This paper analyzes the feasibility of utilizing multiple low-cost on-board sensors for ground robots or drones navigating in visually degraded environments.
We propose novel Stacked Spatio-Temporal Graph Convolutional Networks (Stacked-STGCN) for action segmentation, i.e., predicting and localizing a sequence of actions over long videos.
This paper presents an approach to creating runtime-throttleable neural networks (NNs) that can adaptively balance performance and resource use in response to a control signal.
Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness
We propose a Twenty-Questions style collaborative image retrieval game as a method of evaluating the efficacy of explanations (visual evidence or textual justification) in the context of Visual Question Answering.
In this paper, we describe the algorithmic steps taken in the processing pipeline to quickly create a global image database of an entire advanced integrated circuit (IC).
In this work, we revisit a particular family of spectral graph networks, Chebyshev GCNs, and show their efficacy in solving graph classification tasks on graphs of variable structure and size.