
Multiscale Vision Transformers

2021-04-22 · ICCV 2021 · Code Available

Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, Christoph Feichtenhofer

Abstract

We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers at spatially coarse, but complex, high-dimensional features. We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks where it outperforms concurrent vision transformers that rely on large scale external pre-training and are 5-10x more costly in computation and parameters. We further remove the temporal dimension and apply our model for image classification where it outperforms prior work on vision transformers. Code is available at: https://github.com/facebookresearch/SlowFast
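The abstract describes the core architectural prior: stages that progressively halve spatial resolution while expanding channel capacity, forming a feature pyramid. A minimal sketch of that shape progression (the starting resolution, channel width, and exact doubling schedule here are illustrative assumptions, not the paper's exact configuration):

```python
def multiscale_stages(resolution, channels, num_stages):
    """Illustrate the multiscale pyramid's shape progression:
    each stage halves spatial resolution and doubles channels,
    so early stages are spatially fine with few channels and
    late stages are spatially coarse with high-dimensional features."""
    shapes = []
    for _ in range(num_stages):
        shapes.append((resolution, resolution, channels))
        resolution //= 2   # reduce spatial resolution
        channels *= 2      # expand channel capacity
    return shapes

# Hypothetical image-model config: 56x56 tokens, 96 channels, 4 stages
for shape in multiscale_stages(56, 96, 4):
    print(shape)
# (56, 56, 96) -> (28, 28, 192) -> (14, 14, 384) -> (7, 7, 768)
```

For video, the token grid gains a temporal axis, but the same channel-resolution trade-off across stages applies.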

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| AVA v2.2 | MViT-B-24, 32x3 (Kinetics-600 pretraining) | mAP | 28.7 | — | Unverified |
| AVA v2.2 | MViT-B, 32x3 (Kinetics-600 pretraining) | mAP | 27.5 | — | Unverified |
| AVA v2.2 | MViT-B, 64x3 (Kinetics-400 pretraining) | mAP | 27.3 | — | Unverified |
| AVA v2.2 | MViT-B, 32x3 (Kinetics-400 pretraining) | mAP | 26.8 | — | Unverified |
| AVA v2.2 | MViT-B, 16x4 (Kinetics-600 pretraining) | mAP | 26.1 | — | Unverified |
| AVA v2.2 | MViT-B, 16x4 (Kinetics-400 pretraining) | mAP | 24.5 | — | Unverified |
| Something-Something V2 | MViT-B, 16x4 | Top-1 Accuracy | 66.2 | — | Unverified |
| Something-Something V2 | MViT-B-24, 32x3 | Top-1 Accuracy | 68.7 | — | Unverified |
| Something-Something V2 | MViT-B, 32x3 (Kinetics-600 pretraining) | Top-1 Accuracy | 67.8 | — | Unverified |

Reproductions