Machine learning publications
Saccade Mechanisms for Image Classification, Object Detection and Tracking
Abstract We examine how the saccade mechanism from biological vision can be used to make deep neural networks more efficient for classification and object detection problems. Our proposed approach is based on the ideas of attention-driven visual processing and saccades, miniature eye movements influenced by attention. We conduct experiments by analyzing: i) the robustness of […]
Conformal Prediction Intervals for Markov Decision Process Trajectories
This paper extends previous work on conformal prediction for functional data and conformalized quantile regression…
Broadening AI Ethics Narratives: An Indic Arts View
In this paper, we investigate how the unique socio-cultural perspectives embedded in human-made art can be uncovered and, in turn, used to expand the horizon of AI ethics.
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models
Abstract Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model […]
Challenges in Procedural Multimodal Machine Comprehension: A Novel Way to Benchmark
We identify three critical biases stemming from the question-answer generation process and the memorization capabilities of large deep models.
“How to best say it?” : Translating Directives in Machine Language into Natural Language in the Blocks World
We propose a method to generate optimal natural language for block placement directives generated by a machine’s planner during human-agent interactions in the blocks world.
Comprehension Based Question Answering Using Bloom’s Taxonomy
Abstract Current pre-trained language models contain substantial knowledge but only a limited ability to use that knowledge. Bloom’s Taxonomy helps educators teach children how to use knowledge by categorizing comprehension skills, so we use it to analyze and improve the comprehension skills of large pre-trained language models. Our experiments focus on zero-shot question answering, […]
Modular Adaptation for Cross-Domain Few-Shot Learning
Abstract Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples. While the literature has demonstrated great successes in representation learning, in this work we show that substantial performance improvement on downstream tasks can also be achieved by appropriate design of the adaptation process. Specifically, we propose a modular adaptation […]