ViViT: A Video Vision Transformer

2021-03-29 · ICCV 2021 · Code Available

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid


Abstract

We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several efficient variants of our model which factorise the spatial and temporal dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks. To facilitate further research, we release code at https://github.com/google-research/scenic/tree/main/scenic/projects/vivit
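The abstract outlines two core ideas: extracting spatio-temporal tokens from the video, and factorised variants that attend over the spatial and temporal dimensions separately to keep sequence lengths manageable. Below is a minimal PyTorch sketch of both (the released code is JAX-based, in Scenic); the class names, default sizes, and the mean-pooling over spatial tokens are illustrative assumptions, and positional embeddings plus the classification head are omitted.

```python
import torch
import torch.nn as nn

class TubeletEmbedding(nn.Module):
    """Extracts spatio-temporal tokens: non-overlapping t x h x w video
    tubelets are linearly projected via a strided 3D convolution."""

    def __init__(self, embed_dim=768, tubelet=(2, 16, 16)):
        super().__init__()
        self.proj = nn.Conv3d(3, embed_dim, kernel_size=tubelet, stride=tubelet)

    def forward(self, video):
        # video: (batch, 3, frames, height, width)
        x = self.proj(video)                   # (B, D, T', H', W')
        # One token per tubelet, grouped by temporal index: (B, T', H'*W', D)
        return x.flatten(3).permute(0, 2, 3, 1)

class FactorisedEncoder(nn.Module):
    """Factorised-encoder sketch: a spatial transformer encodes each
    temporal index independently, then a temporal transformer models
    interactions across the pooled per-frame representations."""

    def __init__(self, embed_dim=768, depth_spatial=4, depth_temporal=4, heads=12):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, batch_first=True)
        self.spatial = nn.TransformerEncoder(layer(), num_layers=depth_spatial)
        self.temporal = nn.TransformerEncoder(layer(), num_layers=depth_temporal)

    def forward(self, tokens):
        # tokens: (B, T', N, D) from TubeletEmbedding
        b, t, n, d = tokens.shape
        x = self.spatial(tokens.reshape(b * t, n, d))  # attention within a frame
        frames = x.mean(dim=1).reshape(b, t, d)        # pool spatial tokens (assumed; the paper can also use a CLS token)
        return self.temporal(frames)                   # attention across time
```

With these defaults, a 32-frame 224x224 clip yields 16 temporal indices of 196 tokens each; the factorised encoder reduces them to 16 frame-level vectors before temporal attention, which is how the factorised variants avoid full attention over every spatio-temporal token at once.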

Tasks

Video Classification

Benchmark Results

Dataset                  Model                       Metric          Claimed  Verified  Status
EPIC-KITCHENS-100        ViViT-L/16x2 Fact. encoder  Action@1        44       —         Unverified
Something-Something V2   ViViT-L/16x2 Fact. encoder  Top-1 Accuracy  65.4     —         Unverified
