
Multiview Transformers for Video Recognition

2022-01-12 · CVPR 2022 · Code Available

Shen Yan, Xuehan Xiong, Anurag Arnab, Zhichao Lu, Mi Zhang, Chen Sun, Cordelia Schmid


Abstract

Video understanding requires reasoning at multiple spatiotemporal resolutions -- from short fine-grained motions to events taking place over longer durations. Although transformer architectures have recently advanced the state-of-the-art, they have not explicitly modelled different spatiotemporal resolutions. To this end, we present Multiview Transformers for Video Recognition (MTV). Our model consists of separate encoders to represent different views of the input video with lateral connections to fuse information across views. We present thorough ablation studies of our model and show that MTV consistently performs better than single-view counterparts in terms of accuracy and computational cost across a range of model sizes. Furthermore, we achieve state-of-the-art results on six standard datasets, and improve even further with large-scale pretraining. Code and checkpoints are available at: https://github.com/google-research/scenic/tree/main/scenic/projects/mtv.
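The architecture the abstract describes is easy to sketch: each view is a ViT-style encoder over tubelet tokens of a different spatiotemporal granularity, with lateral connections fusing information across views. Below is a minimal sketch in JAX/Flax (the framework of the linked Scenic codebase), assuming two views formed by tubelets of different temporal extent and cross-attention as the lateral connection. All module names, dimensions, and the exact fusion rule are illustrative assumptions, not the paper's configuration; positional embeddings and the paper's global encoder are omitted for brevity.

```python
# Minimal sketch of the multiview idea, NOT the authors' implementation
# (see the Scenic repo for that). Sizes and fusion choice are illustrative.
import jax
import jax.numpy as jnp
import flax.linen as nn


class EncoderBlock(nn.Module):
    dim: int
    heads: int

    @nn.compact
    def __call__(self, x):
        # Standard pre-norm ViT block: self-attention then MLP.
        y = nn.LayerNorm()(x)
        y = nn.MultiHeadDotProductAttention(num_heads=self.heads)(y, y)
        x = x + y
        y = nn.LayerNorm()(x)
        y = nn.Dense(self.dim * 4)(y)
        y = nn.gelu(y)
        y = nn.Dense(self.dim)(y)
        return x + y


class MultiviewSketch(nn.Module):
    num_classes: int
    dim: int = 256
    heads: int = 4
    depth: int = 4

    @nn.compact
    def __call__(self, video):  # video: (batch, frames, height, width, 3)
        # Two "views" of the same clip: tubelets of different temporal
        # extent. Larger tubelets give fewer, coarser tokens; smaller
        # tubelets give more, finer-grained tokens.
        coarse = nn.Conv(self.dim, kernel_size=(8, 16, 16),
                         strides=(8, 16, 16))(video)
        fine = nn.Conv(self.dim, kernel_size=(2, 16, 16),
                       strides=(2, 16, 16))(video)
        coarse = coarse.reshape(coarse.shape[0], -1, self.dim)
        fine = fine.reshape(fine.shape[0], -1, self.dim)

        for _ in range(self.depth):
            coarse = EncoderBlock(self.dim, self.heads)(coarse)
            fine = EncoderBlock(self.dim, self.heads)(fine)
            # Lateral connection: fuse fine-view tokens into the coarse
            # view with cross-attention (one plausible fusion choice).
            fused = nn.MultiHeadDotProductAttention(num_heads=self.heads)(
                nn.LayerNorm()(coarse), fine)
            coarse = coarse + fused

        # Pool each view and classify from the concatenated representations.
        pooled = jnp.concatenate(
            [coarse.mean(axis=1), fine.mean(axis=1)], axis=-1)
        return nn.Dense(self.num_classes)(pooled)


model = MultiviewSketch(num_classes=174)  # e.g. Something-Something V2
dummy = jnp.zeros((1, 16, 64, 64, 3))
params = model.init(jax.random.PRNGKey(0), dummy)
print(model.apply(params, dummy).shape)  # (1, 174)
```

The coarse view covers long temporal extents cheaply while the fine view preserves short-range motion detail; the lateral cross-attention lets the coarse stream query fine-grained tokens at every block, which is the intuition behind the multiview fusion.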

Tasks

Action Recognition

Benchmark Results

Dataset                 Model             Metric          Claimed  Verified  Status
EPIC-KITCHENS-100       MTV-B (WTS 60M)   Action@1        50.5     -         Unverified
Something-Something V2  MTV-B             Top-1 Accuracy  68.5     -         Unverified
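For context on the Metric column: Top-1 accuracy is the fraction of clips whose single highest-scoring prediction matches the label, and EPIC-KITCHENS' Action@1 counts a clip as correct only when both the predicted verb and noun are right. A minimal sketch of how a reproduction might check these numbers; the function names and array arguments are illustrative, not part of this page's tooling:

```python
import jax.numpy as jnp

def top1_accuracy(logits, labels):
    # Fraction of clips whose highest-scoring class matches the label.
    return jnp.mean(jnp.argmax(logits, axis=-1) == labels)

def action_at_1(verb_logits, noun_logits, verb_labels, noun_labels):
    # EPIC-KITCHENS Action@1: the verb AND the noun must both be correct.
    verb_ok = jnp.argmax(verb_logits, axis=-1) == verb_labels
    noun_ok = jnp.argmax(noun_logits, axis=-1) == noun_labels
    return jnp.mean(verb_ok & noun_ok)
```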

Reproductions

No reproductions yet. Be the first to reproduce this paper.