De-Correlating CNN Features for Generative Classification

Citation

Desai, C., Eledath, J., Sawhney, H., & Bansal, M. (2015, 6-9 January). De-correlating CNN features for generative classification. Paper presented at the IEEE Winter Conference on Applications of Computer Vision (WACV’15), Waikoloa Beach, HI.

Abstract

The problem of training a classifier from a handful of positive examples, without having to supply class-specific negatives, is of great practical importance. The proposed approach to solving this problem builds on the idea of training LDA classifiers using only class-specific foreground images and a large collection of unlabeled images, as described in [11]. While we adopt the LDA training methodology of [11], we depart from HOG features and instead work with features extracted from a Convolutional Neural Network (CNN) pre-trained on ImageNet (OverFeat). We combine OverFeat features with the LDA training methodology to derive generative classifiers. When evaluated on a K-way classification problem, these classifiers are almost as good as those trained discriminatively using the same features. Unlike the HOG-based approach of [11], our classifiers do not need any post-processing calibration step, a step that requires positives and negatives. Finally, we show that in an instance-retrieval setup, we can employ these generative classifiers to derive a novel query-expansion framework that achieves a significant performance boost by utilizing only the top-ranked positive examples from an initial nearest-neighbor list.
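
Illustrative sketch

For readers unfamiliar with the LDA training scheme of [11], the following minimal NumPy sketch shows the general idea as applied to CNN feature vectors: a background mean and covariance are estimated once from a large pool of unlabeled images, and each class's classifier is then w = Sigma^{-1}(mu_class - mu_0) using only that class's positives. Variable names, shapes, and the regularization constant are assumptions for illustration, not the authors' implementation.

import numpy as np

def fit_background_statistics(unlabeled_features):
    # unlabeled_features: (N, D) array of CNN features from unlabeled images.
    # Estimate the shared background mean and covariance (computed once).
    mu0 = unlabeled_features.mean(axis=0)
    centered = unlabeled_features - mu0
    sigma = centered.T @ centered / len(unlabeled_features)
    # Small ridge term (assumed value) so the covariance is invertible.
    sigma += 1e-3 * np.eye(sigma.shape[0])
    return mu0, sigma

def fit_lda_classifier(positive_features, mu0, sigma):
    # Train a classifier from a handful of positives only, with no
    # class-specific negatives: w = Sigma^{-1} (mu_class - mu0).
    mu_class = positive_features.mean(axis=0)
    return np.linalg.solve(sigma, mu_class - mu0)

def classify(x, classifiers):
    # K-way decision: pick the class whose LDA score w_k . x is largest.
    scores = {name: w @ x for name, w in classifiers.items()}
    return max(scores, key=scores.get)

Because the background statistics are shared across classes, adding a new class only requires averaging its positive features and one linear solve, which is what makes the approach practical when negatives are unavailable.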
