
Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning

2022-12-06 · CVPR 2023 · Code Available

AJ Piergiovanni, Weicheng Kuo, Anelia Angelova

Abstract

We present a simple approach which can turn a ViT encoder into an efficient video model, which can seamlessly work with both image and video inputs. By sparsely sampling the inputs, the model is able to do training and inference from both inputs. The model is easily scalable and can be adapted to large-scale pre-trained ViTs without requiring full finetuning. The model achieves SOTA results and the code will be open-sourced.
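The core idea is to tokenize a video with a small set of sparse "tubes": 3D convolutions with differently shaped kernels and large strides, alongside a 2D-style patch tube so a single ViT encoder can consume images and videos interchangeably. The sketch below illustrates this tokenization; it is not the authors' released code, and the kernel shapes, strides, and embedding dimension are illustrative assumptions (positional embeddings and the ViT encoder itself are omitted).

```python
import torch
import torch.nn as nn


class SparseTubeTokenizer(nn.Module):
    """Illustrative sparse-tube tokenizer: a few strided 3D convs produce
    sparse video tokens; the 1x16x16 tube doubles as the image patch embed."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        # (kernel t,h,w), (stride t,h,w) -- large strides keep sampling sparse.
        tube_cfgs = [
            ((1, 16, 16), (32, 16, 16)),  # image-style patches
            ((8, 8, 8),   (16, 32, 32)),  # long, spatially small tube
            ((16, 4, 4),  (6, 32, 32)),   # very long, thin tube
            ((4, 12, 12), (16, 32, 32)),  # short, spatially large tube
        ]
        self.tubes = nn.ModuleList(
            nn.Conv3d(3, embed_dim, kernel_size=k, stride=s) for k, s in tube_cfgs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, T, H, W); a single image is passed as a 1-frame clip,
        # in which case only the first tube fires, matching a plain ViT.
        tokens = []
        for conv in self.tubes:
            if x.shape[2] >= conv.kernel_size[0]:   # skip tubes longer than the clip
                t = conv(x)                          # (B, D, t', h', w')
                tokens.append(t.flatten(2).transpose(1, 2))  # (B, N_i, D)
        return torch.cat(tokens, dim=1)              # (B, N, D) -> ViT encoder input


if __name__ == "__main__":
    tok = SparseTubeTokenizer()
    video = torch.randn(2, 3, 32, 224, 224)  # short video clip
    image = torch.randn(2, 3, 1, 224, 224)   # image as a 1-frame "video"
    print(tok(video).shape, tok(image).shape)
```

Because the image path reduces to ordinary 16x16 patch embedding, a pre-trained image ViT can be reused for the encoder, which is what allows adaptation without full finetuning.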

Tasks

Benchmark Results

Dataset                | Model     | Metric         | Claimed | Verified | Status
Something-Something V2 | TubeViT-L | Top-1 Accuracy | 76.1    | —        | Unverified

Reproductions