SOTAVerified

CLIP2Video: Mastering Video-Text Retrieval via Image CLIP

2021-06-21 · Code Available

Han Fang, Pengfei Xiong, Luhui Xu, Yu Chen


Abstract

We present the CLIP2Video network, which transfers an image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in video-and-language learning try to distill spatio-temporal video features and the multi-modal interaction between videos and language from large-scale video-text datasets. In contrast, we leverage a pretrained image-language model and simplify the problem into a two-stage framework that co-learns image-text correspondence and then enhances temporal relations between video frames and between video and text, making it trainable on comparatively small datasets. Specifically, building on the spatial semantics captured by the Contrastive Language-Image Pretraining (CLIP) model, our model introduces a Temporal Difference Block to capture motion across fine-grained video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance multi-modal correlation. We conduct thorough ablation studies and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX.
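The abstract's Temporal Difference Block feeds motion cues to the model by contrasting adjacent frame features. The paper's exact implementation is not reproduced here; the following is a minimal NumPy sketch under the assumption that per-frame CLIP embeddings are interleaved with their first-order temporal differences before further encoding (the function name and layout are illustrative, not the authors' code).

```python
import numpy as np

def temporal_difference_features(frame_embeds):
    """Hypothetical sketch of a temporal-difference feature layout.

    frame_embeds: (num_frames, dim) array of per-frame CLIP features.
    Returns a (2 * num_frames - 1, dim) sequence where each original
    frame embedding is followed by its difference to the next frame,
    exposing motion information to a downstream temporal encoder.
    """
    diffs = np.diff(frame_embeds, axis=0)  # (num_frames - 1, dim)
    out = []
    for i in range(len(diffs)):
        out.append(frame_embeds[i])  # appearance token
        out.append(diffs[i])         # motion (difference) token
    out.append(frame_embeds[-1])     # last frame has no successor
    return np.stack(out)

# Example: 4 frames of 8-dim features -> 7 interleaved tokens.
x = np.random.rand(4, 8)
print(temporal_difference_features(x).shape)  # (7, 8)
```

The interleaving keeps appearance and motion tokens in one sequence, so a standard transformer encoder can attend over both without architectural changes.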

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| MSR-VTT | CLIP2Video | text-to-video R@1 | 29.8 | — | Unverified |
| MSR-VTT-1kA | CLIP2Video | text-to-video R@1 | 45.6 | — | Unverified |
| VATEX | CLIP2Video | text-to-video R@1 | 57.3 | — | Unverified |

Reproductions