
VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning

2021-06-21 · Code Available

Hao Tan, Jie Lei, Thomas Wolf, Mohit Bansal


Abstract

Video understanding relies on perceiving the global content and modeling its internal connections (e.g., causality, movement, and spatio-temporal correspondence). To learn these interactions, we apply a mask-then-predict pre-training task on discretized video tokens generated via VQ-VAE. Unlike language, where the text tokens are more independent, neighboring video tokens typically have strong correlations (e.g., consecutive video frames usually look very similar), and hence uniformly masking individual tokens will make the task too trivial to learn useful representations. To deal with this issue, we propose a block-wise masking strategy where we mask neighboring video tokens in both spatial and temporal domains. We also add an augmentation-free contrastive learning method to further capture the global content by predicting whether the video clips are sampled from the same video. We pre-train our model on uncurated videos and show that our pre-trained model can reach state-of-the-art results on several video understanding datasets (e.g., SSV2, Diving48). Lastly, we provide detailed analyses on model scalability and pre-training method design. Code is released at https://github.com/airsplay/vimpac.
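
The two pre-training objectives described in the abstract can be sketched briefly. First, block-wise masking: rather than masking individual tokens uniformly at random, contiguous spatio-temporal blocks of the (T, H, W) grid of VQ-VAE tokens are masked until a target ratio is reached, so the model cannot trivially copy a masked token from its near-identical neighbors. This is a minimal illustrative sketch: the function name `block_wise_mask`, the block-size caps, and the 50% mask ratio are assumptions, not the paper's exact hyperparameters (see the released code for the authors' implementation).

```python
import numpy as np

def block_wise_mask(t, h, w, mask_ratio=0.5, max_block=(4, 8, 8), rng=None):
    """Return a boolean (t, h, w) array; True marks tokens to mask and predict.

    Masking random spatio-temporal blocks (instead of i.i.d. tokens) keeps the
    mask-then-predict task from becoming trivial on highly correlated frames.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((t, h, w), dtype=bool)
    target = int(mask_ratio * t * h * w)
    while mask.sum() < target:
        # Sample a block size per dimension (clamped to the grid),
        # then a random corner for the block.
        bt = min(int(rng.integers(1, max_block[0] + 1)), t)
        bh = min(int(rng.integers(1, max_block[1] + 1)), h)
        bw = min(int(rng.integers(1, max_block[2] + 1)), w)
        t0 = rng.integers(0, t - bt + 1)
        h0 = rng.integers(0, h - bh + 1)
        w0 = rng.integers(0, w - bw + 1)
        mask[t0:t0 + bt, h0:h0 + bh, w0:w0 + bw] = True
    return mask

# Example: mask a 5 x 16 x 16 grid of discretized video tokens.
print(block_wise_mask(5, 16, 16).mean())  # ~0.5 (slightly above the target)
```

Second, the augmentation-free contrastive objective: two clips sampled from the same video form a positive pair, and clips from other videos in the batch serve as negatives. A standard InfoNCE formulation over clip embeddings, shown below, is one common way to realize this; the paper's exact loss may differ in detail.

```python
import torch
import torch.nn.functional as F

def same_video_nce(z1, z2, temperature=0.1):
    """InfoNCE over clip embeddings: z1[i] and z2[i] come from the same
    video (positive pair); all other pairings in the batch are negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```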

Benchmark Results

Dataset                 Model   Metric                        Claimed  Verified  Status
Diving-48               VIMPAC  Accuracy                      85.5     —         Unverified
HMDB-51                 VIMPAC  Average accuracy of 3 splits  65.9     —         Unverified
Something-Something V2  VIMPAC  Top-1 Accuracy                68.1     —         Unverified
UCF101                  VIMPAC  3-fold Accuracy               92.7     —         Unverified

Reproductions

No reproductions have been submitted yet. Be the first to reproduce this paper.