SOTAVerified

MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection

2021-12-07 · CVPR 2022 · Code Available

Rui Dai, Srijan Das, Kumara Kahatapitiya, Michael S. Ryoo, Francois Bremond


Abstract

Action detection is an essential and challenging task, especially for densely labelled datasets of untrimmed videos. The temporal relations in such datasets are complex, including challenges like composite actions and co-occurring actions. To detect actions in these complex videos, it is critical to efficiently capture both short-term and long-term temporal information. To this end, we propose a novel ConvTransformer network for action detection. The network comprises three main components: (1) a Temporal Encoder module that explores global and local temporal relations at multiple temporal resolutions; (2) a Temporal Scale Mixer module that effectively fuses the multi-scale features into a unified feature representation; (3) a Classification module that learns the instance center-relative position and predicts frame-level classification scores. Extensive experiments on multiple datasets, including Charades, TSU and MultiTHUMOS, confirm the effectiveness of the proposed method. Our network outperforms the state-of-the-art methods on all three datasets.
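The three-stage pipeline described in the abstract (multi-resolution encoding, scale mixing, frame-level classification) can be sketched in plain NumPy. This is an illustrative sketch only, not the authors' implementation: the helper names (`temporal_downsample`, `temporal_upsample`, `ms_tct_sketch`), the choice of scales, and the random weight matrices standing in for the learned ConvTransformer blocks are all assumptions made for demonstration.

```python
import numpy as np

def temporal_downsample(x, factor):
    """Average-pool along time (axis 0) by `factor` (hypothetical helper)."""
    T, C = x.shape
    T2 = T // factor
    return x[:T2 * factor].reshape(T2, factor, C).mean(axis=1)

def temporal_upsample(x, length):
    """Nearest-neighbour upsample along time back to `length` frames."""
    idx = (np.arange(length) * x.shape[0] // length).clip(max=x.shape[0] - 1)
    return x[idx]

def ms_tct_sketch(features, scales=(1, 2, 4), num_classes=5, seed=0):
    """Illustrative multi-scale encode -> mix -> classify pipeline.

    `features`: (T, C) frame-level features from a video backbone.
    Returns (T, num_classes) per-frame scores. Random projections stand
    in for the paper's learned ConvTransformer blocks.
    """
    rng = np.random.default_rng(seed)
    T, C = features.shape
    # (1) Temporal Encoder: process the sequence at several temporal resolutions.
    branches = []
    for s in scales:
        xs = temporal_downsample(features, s)          # coarser time axis
        W = rng.standard_normal((C, C)) / np.sqrt(C)   # stand-in for a conv/attention block
        branches.append(np.tanh(xs @ W))
    # (2) Temporal Scale Mixer: upsample every branch to T frames and average.
    mixed = sum(temporal_upsample(b, T) for b in branches) / len(scales)
    # (3) Classification head: per-frame class scores.
    Wc = rng.standard_normal((C, num_classes)) / np.sqrt(C)
    return mixed @ Wc

scores = ms_tct_sketch(np.random.default_rng(1).standard_normal((32, 16)))
print(scores.shape)  # (32, 5)
```

The key design point reflected here is that each temporal scale sees a different effective receptive field, and the mixer restores a common frame-level resolution before classification, which is what allows both short-term and long-term relations to contribute to every frame's prediction.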

Tasks

Benchmark Results

Dataset        Model               Metric      Claimed   Verified   Status
Charades       MS-TCT (RGB only)   mAP         25.4                 Unverified
Multi-THUMOS   MS-TCT (RGB only)   mAP         43.1                 Unverified
TSU            MS-TCT              Frame-mAP   33.7                 Unverified

Reproductions