SOTAVerified

The 2021 Image Similarity Dataset and Challenge

2021-06-17 · Code Available

Matthijs Douze, Giorgos Tolias, Ed Pizzi, Zoë Papakipos, Lowik Chanussot, Filip Radenovic, Tomas Jenicek, Maxim Maximov, Laura Leal-Taixé, Ismail Elezi, Ondřej Chum, Cristian Canton Ferrer


Abstract

This paper introduces a new benchmark for large-scale image similarity detection, used for the Image Similarity Challenge at NeurIPS'21 (ISC2021). The goal is to determine whether a query image is a modified copy of any image in a reference corpus of 1 million images. The benchmark features a variety of image transformations, such as automated transformations, hand-crafted image edits, and machine-learning-based manipulations. This mimics real-life cases arising on social media, for example in integrity-related problems dealing with misinformation and objectionable content. The strength of the image manipulations, and therefore the difficulty of the benchmark, is calibrated against the performance of a set of baseline approaches. Both the query and reference sets contain a majority of "distractor" images that do not match, corresponding to a real-life needle-in-haystack setting, and the evaluation metric reflects that. We expect the DISC21 benchmark to promote image copy detection as an important and challenging computer vision task and to refresh the state of the art. Code and data are available at https://github.com/facebookresearch/isc2021.
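The baselines listed below (GIST, MultiGrain, HOW+ASMK) all follow the same general recipe: embed every image as a descriptor vector, then match each query against the reference corpus by nearest-neighbor search and keep only matches whose similarity clears a threshold. A minimal sketch of that recipe, with random vectors standing in for real descriptors and an illustrative threshold (not the official pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real image descriptors (e.g. 256-d GIST-PCA vectors).
reference = rng.standard_normal((1000, 256)).astype("float32")
queries = rng.standard_normal((5, 256)).astype("float32")

# L2-normalize so the inner product equals cosine similarity.
reference /= np.linalg.norm(reference, axis=1, keepdims=True)
queries /= np.linalg.norm(queries, axis=1, keepdims=True)

# For each query, find the best-matching reference image and its score.
sims = queries @ reference.T           # (5, 1000) similarity matrix
nearest = sims.argmax(axis=1)          # index of the closest reference
scores = sims.max(axis=1)              # similarity of that match

# A query is predicted to be a copy only if its score clears a threshold;
# since most queries are distractors, a high threshold is typical.
threshold = 0.5                        # illustrative value
predictions = [(q, int(r), float(s))
               for q, (r, s) in enumerate(zip(nearest, scores))
               if s >= threshold]
print(predictions)
```

At DISC21 scale (1 million references), the brute-force matrix product above would be replaced by an approximate nearest-neighbor index such as FAISS, but the matching logic is the same.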

Benchmark Results

Dataset    | Model               | Metric            | Claimed | Verified | Status
DISC21 dev | HOW+ASMK            | w/o normalization | 17.32   | —        | Unverified
DISC21 dev | MultiGrain 1500 dim | w/o normalization | 16.47   | —        | Unverified
DISC21 dev | GIST PCA 256        | w/o normalization | 15.56   | —        | Unverified
DISC21 dev | GIST 960 dim        | w/o normalization | 14.42   | —        | Unverified
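The scores above are micro-average precision (µAP), the needle-in-haystack metric the challenge uses: all predicted (query, reference) pairs are pooled and ranked globally by score, and average precision is computed against the full set of ground-truth copy pairs, so unretrieved true pairs count as recall misses. A minimal sketch of that computation (variable names are illustrative, not the official evaluation code):

```python
def micro_average_precision(predictions, ground_truth):
    """predictions: list of (query_id, ref_id, score) tuples;
    ground_truth: set of (query_id, ref_id) true copy pairs.
    Pairs are ranked globally by score across all queries."""
    ranked = sorted(predictions, key=lambda p: -p[2])
    total_positives = len(ground_truth)
    hits = 0
    ap = 0.0
    for rank, (q, r, _) in enumerate(ranked, start=1):
        if (q, r) in ground_truth:
            hits += 1
            ap += hits / rank          # precision at this recall point
    return ap / total_positives if total_positives else 0.0

# Toy example: two true pairs, three predictions, one false positive
# ranked between the two hits.
gt = {("q1", "r7"), ("q2", "r3")}
preds = [("q1", "r7", 0.9), ("q3", "r1", 0.8), ("q2", "r3", 0.7)]
print(micro_average_precision(preds, gt))  # (1/1 + 2/3) / 2 ≈ 0.833
```

The "w/o normalization" qualifier indicates that µAP is computed on raw similarity scores, before the score-normalization step the paper applies to make scores comparable across queries.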

Reproductions