
Training Vision Transformers for Image Retrieval

2021-02-10

Alaaeldin El-Nouby, Natalia Neverova, Ivan Laptev, Hervé Jégou


Abstract

Transformers have shown outstanding results for natural language understanding and, more recently, for image classification. Here we extend this work and propose a transformer-based approach for image retrieval: we adopt vision transformers for generating image descriptors and train the resulting model with a metric learning objective, which combines a contrastive loss with a differential entropy regularizer. Our results show consistent and significant improvements of transformers over convolution-based approaches. In particular, our method outperforms the state of the art on several public benchmarks for category-level retrieval, namely Stanford Online Products, In-Shop and CUB-200. Furthermore, our experiments on ROxford and RParis also show that, in comparable settings, transformers are competitive for particular object retrieval, especially in the regime of short vector representations and low-resolution images.
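The training objective described above combines a contrastive loss on L2-normalized descriptors with a differential entropy regularizer that spreads the embeddings over the sphere. A minimal dependency-free sketch of that combination is shown below; the function names, the margin, the nearest-neighbor entropy estimate, and the trade-off weight `lam` are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import math

def l2_normalize(v):
    # Project a descriptor onto the unit sphere, as done before
    # computing distances between embeddings.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dist(a, b):
    # Euclidean distance between two descriptors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb, labels, margin=0.5):
    # Pairwise contrastive loss: pull same-label pairs together,
    # push different-label pairs apart up to a margin.
    # (margin value is a placeholder, not from the paper.)
    total, count = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            d = dist(emb[i], emb[j])
            if labels[i] == labels[j]:
                total += d ** 2
            else:
                total += max(0.0, margin - d) ** 2
            count += 1
    return total / count

def entropy_regularizer(emb):
    # Nearest-neighbor estimate of differential entropy:
    # -mean log distance to the closest other embedding.
    # Minimizing this term pushes descriptors apart, encouraging
    # a more uniform spread over the sphere.
    reg = 0.0
    for i, e in enumerate(emb):
        nn = min(dist(e, o) for j, o in enumerate(emb) if j != i)
        reg += -math.log(nn + 1e-8)
    return reg / len(emb)

# Toy batch of 2-D descriptors (in practice these come from the
# vision transformer's output).
emb = [l2_normalize(v) for v in [[1.0, 0.2], [0.9, 0.3],
                                 [-0.5, 1.0], [0.1, -1.0]]]
labels = [0, 0, 1, 2]
lam = 0.7  # hypothetical trade-off weight between the two terms
loss = contrastive_loss(emb, labels) + lam * entropy_regularizer(emb)
```

In a real pipeline both terms would be computed on a batch of transformer outputs with automatic differentiation; this sketch only illustrates how the two terms of the objective combine into a single scalar loss.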
