SOTAVerified

TadML: A fast temporal action detection with Mechanics-MLP

2022-06-07 · Code Available

Bowen Deng, Dongchang Liu


Abstract

Temporal Action Detection (TAD) is a crucial but challenging task in video understanding. It aims to detect both the type and the start-end frames of each action instance in a long, untrimmed video. Most current models use both RGB and optical-flow streams for TAD, so the original RGB frames must first be converted into optical-flow frames, adding computation and time cost that is an obstacle to real-time processing. Many current models also adopt two-stage strategies, which slow down inference and require complicated tuning of proposal generation. By comparison, we propose a one-stage, anchor-free temporal localization method that uses the RGB stream only, in which a novel Newtonian Mechanics-MLP architecture is established. It achieves accuracy comparable to all existing state-of-the-art models while surpassing their inference speed by a large margin: the typical inference speed in this paper is 4.44 videos per second on THUMOS14. In applications, inference is even faster because no optical-flow conversion is needed. This also shows that MLPs have great potential in downstream tasks such as TAD. The source code is available at https://github.com/BonedDeng/TadML
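The abstract does not detail the Mechanics-MLP block itself. As a rough illustration of the general idea of an MLP-based one-stage, anchor-free detector, the sketch below combines a standard MLP-Mixer-style temporal block (token mixing along time, then channel mixing) with a per-frame head that predicts class logits and start/end boundary offsets. This is a generic technique, not the paper's actual architecture; all function and parameter names (`mixer_block`, `tad_head`, etc.) are hypothetical.

```python
# Illustrative sketch only, NOT the paper's Mechanics-MLP implementation.
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Simple two-layer MLP with ReLU activation.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def mixer_block(x, params):
    """x: (T, C) per-frame features.
    Token mixing applies an MLP across the T frames, channel mixing across C."""
    y = x + mlp(x.T, *params["time"]).T      # mix along the time axis
    return y + mlp(y, *params["channel"])    # mix along the channel axis

def tad_head(x, w_cls, w_reg):
    """Anchor-free head: per-frame class logits and nonnegative
    (distance-to-start, distance-to-end) regression offsets."""
    cls_logits = x @ w_cls            # (T, num_classes)
    reg = np.maximum(x @ w_reg, 0.0)  # (T, 2)
    return cls_logits, reg

rng = np.random.default_rng(0)
T, C, H, K = 8, 16, 32, 4  # frames, channels, hidden width, classes (toy sizes)
params = {
    "time":    (rng.standard_normal((T, H)) * 0.1, np.zeros(H),
                rng.standard_normal((H, T)) * 0.1, np.zeros(T)),
    "channel": (rng.standard_normal((C, H)) * 0.1, np.zeros(H),
                rng.standard_normal((H, C)) * 0.1, np.zeros(C)),
}
x = rng.standard_normal((T, C))           # stand-in for RGB backbone features
feats = mixer_block(x, params)
cls_logits, reg = tad_head(feats,
                           rng.standard_normal((C, K)) * 0.1,
                           rng.standard_normal((C, 2)) * 0.1)
print(cls_logits.shape, reg.shape)  # (8, 4) (8, 2)
```

Because the head is anchor-free, each frame directly regresses its distances to the action's start and end, so no proposal stage or anchor tuning is needed, which is consistent with the one-stage design the abstract describes.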

Benchmark Results

Dataset     Model                 Metric  Claimed  Verified  Status
THUMOS'14   TadML (two-stream)    mAP     59.7               Unverified
THUMOS'14   TadML (RGB only)      mAP     53.46              Unverified

Reproductions