
Anticipative Video Transformer

2021-06-03 · ICCV 2021 · Code Available

Rohit Girdhar, Kristen Grauman


Abstract

We propose Anticipative Video Transformer (AVT), an end-to-end attention-based video modeling architecture that attends to the previously observed video in order to anticipate future actions. We train the model jointly to predict the next action in a video sequence, while also learning frame feature encoders that are predictive of successive future frames' features. Compared to existing temporal aggregation strategies, AVT has the advantage of maintaining the sequential progression of observed actions while still capturing long-range dependencies, both critical for the anticipation task. Through extensive experiments, we show that AVT obtains the best reported performance on four popular action anticipation benchmarks: EpicKitchens-55, EpicKitchens-100, EGTEA Gaze+, and 50-Salads; and it wins first place in the EpicKitchens-100 CVPR'21 challenge.
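The training setup described above can be sketched in a few lines: a causally masked attention layer over per-frame features, an action head that predicts the next action at each step, and a feature head trained to regress the next frame's features. This is a minimal illustrative sketch in NumPy, not the paper's implementation; all dimensions, weight names, and the single-head attention are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_attention(x):
    """Single-head self-attention where frame t attends only to frames <= t."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # mask future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Illustrative sizes: 8 observed frames, 16-dim features, 5 action classes.
T, d, n_actions = 8, 16, 5
frame_feats = rng.normal(size=(T, d))        # stand-in for backbone features
W_act = rng.normal(size=(d, n_actions))      # next-action classifier head
W_feat = rng.normal(size=(d, d))             # future-feature regression head

h = causal_attention(frame_feats)            # context for each frame, causal
action_logits = h @ W_act                    # predict the action at t+1 from h_t
pred_next_feats = h @ W_feat                 # predict the feature of frame t+1

# Joint objective: cross-entropy on the next action plus a regression loss
# between predicted and actual next-frame features (targets shifted by one).
next_actions = rng.integers(0, n_actions, size=T - 1)
logits = action_logits[:-1]
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
ce = -log_probs[np.arange(T - 1), next_actions].mean()
mse = ((pred_next_feats[:-1] - frame_feats[1:]) ** 2).mean()
loss = ce + mse
```

Because of the causal mask, the context `h[t]` never depends on frames after `t`, which is what lets every timestep serve as an anticipation training example while long-range dependencies within the observed prefix are still captured by attention.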

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| EPIC-KITCHENS-100 | AVT+ | Recall@5 | 15.9 | — | Unverified |
| EPIC-KITCHENS-100 (test) | AVT++ | Recall@5 | 16.7 | — | Unverified |
| EPIC-KITCHENS-100 (test) | AVT+ | Recall@5 | 12.6 | — | Unverified |
| EPIC-KITCHENS-55 (Seen test set (S1)) | AVT+ | Top-1 Accuracy - Act. | 16.84 | — | Unverified |
| EPIC-KITCHENS-55 (Unseen test set (S2)) | AVT+ | Top-1 Accuracy - Act. | 10.41 | — | Unverified |

Reproductions