
Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains

2020-07-24

Carola Figueroa-Flores, Bogdan Raducanu, David Berga, Joost Van de Weijer


Abstract

Most saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness in a complete vision pipeline such as image classification. In this paper, we propose an approach that does not require explicit saliency maps to improve image classification; instead, they are learned implicitly during the training of an end-to-end image classification task. We show that our approach obtains results similar to the case where saliency maps are provided explicitly. Combining RGB data with saliency maps offers a significant advantage for object recognition, especially when training data is limited. We validate our method on several datasets for fine-grained classification tasks (Flowers, Birds and Cars). In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real-image saliency benchmark (Toronto) and outperforms deep saliency models on synthetic images (SID4VAM).
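The core idea in the abstract, a saliency branch trained implicitly from the classification loss that modulates the RGB features, can be illustrated with a minimal NumPy sketch. This is a toy illustration under assumed design choices (a sigmoid 1x1 projection as the saliency branch and multiplicative feature re-weighting as the fusion); the paper's actual architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-in for a convolutional feature map of one image (H x W x C).
H, W, C, n_classes = 8, 8, 4, 3
features = rng.standard_normal((H, W, C))

# Hypothetical saliency branch: a 1x1 projection + sigmoid yields a
# per-pixel saliency map in [0, 1]. In the paper this branch is learned
# end-to-end from the classification loss alone, with no saliency labels.
w_sal = rng.standard_normal(C)
saliency = 1.0 / (1.0 + np.exp(-(features @ w_sal)))   # shape (H, W)

# Assumed fusion: the saliency map re-weights the features before
# pooling, so the classifier focuses on salient regions.
modulated = features * saliency[..., None]
pooled = modulated.mean(axis=(0, 1))                   # shape (C,)

# Linear classifier head on the pooled, saliency-weighted features.
w_cls = rng.standard_normal((C, n_classes))
probs = softmax(pooled @ w_cls)
print(probs.shape)
```

Because the saliency map enters the forward pass, gradients from the classification loss flow back into `w_sal`, which is how a saliency estimator can be trained without any saliency ground truth.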
