SOTAVerified

A Straightforward Framework For Video Retrieval Using CLIP

2021-02-24 · Code Available

Jesús Andrés Portillo-Quintero, José Carlos Ortiz-Bayliss, Hugo Terashima-Marín


Abstract

Video Retrieval is a challenging task in which a text query is matched to a video, or vice versa. Most existing approaches to this problem rely on user-provided annotations. Although simple, this approach is not always feasible in practice. In this work, we explore the application of the language-image model CLIP to obtain video representations without the need for such annotations. CLIP was explicitly trained to learn a common embedding space in which images and text can be compared. Using the techniques described in this document, we extend its application to videos and obtain state-of-the-art results on the MSR-VTT and MSVD benchmarks.
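The abstract describes extending CLIP's shared image-text embedding space to videos. A minimal sketch of the retrieval mechanics, assuming (as the paper reportedly does) that per-frame CLIP image embeddings are mean-pooled into a single video vector and queries are ranked by cosine similarity; the embeddings here are placeholder arrays standing in for real CLIP `encode_image`/`encode_text` outputs:

```python
import numpy as np

def video_embedding(frame_embeddings: np.ndarray) -> np.ndarray:
    """Aggregate per-frame embeddings (n_frames x dim) into one video vector.

    Mean pooling followed by L2 normalization, so that dot products
    between normalized vectors equal cosine similarities.
    """
    v = frame_embeddings.mean(axis=0)
    return v / np.linalg.norm(v)

def rank_videos(text_embedding: np.ndarray, video_embeddings: np.ndarray) -> np.ndarray:
    """Text-to-video retrieval: return video indices sorted by similarity.

    `video_embeddings` is an (n_videos x dim) matrix of normalized video
    vectors; the query is normalized here. R@1 asks whether the correct
    video appears at position 0 of this ranking.
    """
    t = text_embedding / np.linalg.norm(text_embedding)
    similarities = video_embeddings @ t
    return np.argsort(-similarities)
```

In a real pipeline the placeholder arrays would be replaced by CLIP features extracted from sampled video frames and from the tokenized text query; the ranking step itself is unchanged.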

Benchmark Results

Dataset      Model  Metric             Claimed  Verified  Status
LSMDC        CLIP   text-to-video R@1  11.3     —         Unverified
MSR-VTT      CLIP   text-to-video R@1  21.4     —         Unverified
MSR-VTT-1kA  CLIP   text-to-video R@1  31.2     —         Unverified
MSVD         CLIP   text-to-video R@1  37       —         Unverified
