
Local Features and Visual Words Emerge in Activations

2019-06-01 · CVPR 2019 · Code Available

Oriane Simeoni, Yannis Avrithis, Ondrej Chum


Abstract

We propose a novel method of deep spatial matching (DSM) for image retrieval. Initial ranking is based on image descriptors extracted from convolutional neural network activations by global pooling, as in recent state-of-the-art work. However, the same sparse 3D activation tensor is also approximated by a collection of local features. These local features are then robustly matched to approximate the optimal alignment of the tensors. This happens without any network modification, additional layers or training. No local feature detection happens on the original image, and no local feature descriptors or visual vocabulary are needed throughout the whole process. We experimentally show that the proposed method achieves state-of-the-art performance on standard benchmarks across different network architectures and different global pooling methods. The highest gain in performance is achieved when diffusion on the nearest-neighbor graph of global descriptors is initiated from spatially verified images.
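The core idea in the abstract — treating strong positions in the sparse activation tensor as local features whose channel index plays the role of a visual word, then spatially verifying tentative matches — can be sketched as follows. This is a simplified illustration, not the paper's implementation: the functions `tensor_to_local_features` and `match_and_verify`, the top-k per-channel detection, and the translation-only RANSAC model are all assumptions made for clarity.

```python
import numpy as np

def tensor_to_local_features(acts, top_k=5):
    """Approximate a CNN activation tensor (C, H, W) by local features:
    for each channel, keep up to top_k strongest active positions.
    Each feature is (x, y, channel, strength); the channel index acts
    as a 'visual word', so no descriptors or vocabulary are needed.
    (Illustrative sketch; the paper's detection details differ.)"""
    C, H, W = acts.shape
    feats = []
    for c in range(C):
        flat = acts[c].ravel()
        for i in np.argsort(flat)[-top_k:]:
            if flat[i] > 0:  # sparse tensor: keep only active positions
                y, x = divmod(int(i), W)
                feats.append((x, y, c, float(flat[i])))
    return feats

def match_and_verify(feats_a, feats_b, n_iters=200, tol=1.0, rng=None):
    """Tentatively match features that share a channel, then RANSAC-fit
    a translation (a stand-in for the alignment model) and return the
    inlier count as a spatial-verification score."""
    rng = rng or np.random.default_rng(0)
    pairs = [((xa, ya), (xb, yb))
             for (xa, ya, ca, _) in feats_a
             for (xb, yb, cb, _) in feats_b if ca == cb]
    if not pairs:
        return 0
    src = np.array([p[0] for p in pairs], float)
    dst = np.array([p[1] for p in pairs], float)
    best = 0
    for _ in range(n_iters):
        i = rng.integers(len(pairs))
        t = dst[i] - src[i]  # hypothesize a translation from one pair
        inliers = np.sum(np.linalg.norm(src + t - dst, axis=1) <= tol)
        best = max(best, int(inliers))
    return best
```

In the retrieval pipeline the abstract describes, such a verification score would be used to pick the spatially verified images from which diffusion on the nearest-neighbor graph of global descriptors is initiated.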
