SOTAVerified

Temporal Action Localization

Temporal Action Localization aims to detect action instances in untrimmed video and to output the start and end timestamps of each instance. It is closely related to Temporal Action Proposal Generation.
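The metrics that appear in the benchmark tables below (mAP at a fixed temporal IoU threshold, or mAP averaged over a range of thresholds such as 0.3:0.7) score a predicted segment against ground truth by temporal IoU. A minimal sketch of that matching criterion, assuming segments are `(start, end)` pairs in seconds (function names are illustrative, not from any particular codebase):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def matches_at(pred, gt, thresholds=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """At which tIoU thresholds does the prediction count as a hit?"""
    iou = temporal_iou(pred, gt)
    return {t: iou >= t for t in thresholds}

# A prediction covering [2, 6] vs. ground truth [1, 4]:
# intersection = 2, union = 5, so tIoU = 0.4 — a hit at 0.3 and 0.4 only.
```

A notation such as "mAP IOU@0.5" then means mAP computed with hits defined at tIoU ≥ 0.5.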

Papers

Showing 901–950 of 1477 papers

| Title | Hype |
|---|---|
| Towards Good Practices for Action Video Encoding | 0 |
| Towards Improved Human Action Recognition Using Convolutional Neural Networks and Multimodal Fusion of Depth and Inertial Sensor Data | 0 |
| Towards Universal Representation for Unseen Action Recognition | 0 |
| Tracking Human Pose by Tracking Symmetric Parts | 0 |
| Train, Diagnose and Fix: Interpretable Approach for Fine-grained Action Recognition | 0 |
| Training for temporal sparsity in deep neural networks, application in video processing | 0 |
| Trajectory Aligned Features For First Person Action Recognition | 0 |
| Trajectory Convolution for Action Recognition | 0 |
| Transductive Zero-Shot Action Recognition by Word-Vector Embedding | 0 |
| Transferable Feature Representation for Visible-to-Infrared Cross-Dataset Human Action Recognition | 0 |
| Transferable Knowledge-Based Multi-Granularity Aggregation Network for Temporal Action Localization: Submission to ActivityNet Challenge 2021 | 0 |
| Transformer-based Fusion of 2D-pose and Spatio-temporal Embeddings for Distracted Driver Action Recognition | 0 |
| Transition Forests: Learning Discriminative Temporal Transitions for Action Recognition and Detection | 0 |
| TransNet: A Transfer Learning-Based Network for Human Action Recognition | 0 |
| T-RECS: Training for Rate-Invariant Embeddings by Controlling Speed for Action Recognition | 0 |
| Trimmed Action Recognition, Dense-Captioning Events in Videos, and Spatio-temporal Action Localization with Focus on ActivityNet Challenge 2019 | 0 |
| TSI: Temporal Saliency Integration for Video Action Recognition | 0 |
| TUHOI: Trento Universal Human Object Interaction Dataset | 0 |
| Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition | 0 |
| Two-Stream Consensus Network for Weakly-Supervised Temporal Action Localization | 0 |
| Two-Stream Consensus Network: Submission to HACS Challenge 2021 Weakly-Supervised Learning Track | 0 |
| Two Stream LSTM: A Deep Fusion Framework for Human Action Recognition | 0 |
| Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition | 0 |
| Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles | 0 |
| Two-Stream Networks for Weakly-Supervised Temporal Action Localization With Semantic-Aware Mechanisms | 0 |
| Two-Stream RNN/CNN for Action Recognition in 3D Videos | 0 |
| Two Stream Self-Supervised Learning for Action Recognition | 0 |
| Two-stream Spatiotemporal Feature for Video QA Task | 0 |
| UC Merced Submission to the ActivityNet Challenge 2016 | 0 |
| Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition | 0 |
| Unified Keypoint-based Action Recognition Framework via Structured Keypoint Pooling | 0 |
| UnLoc: A Unified Framework for Video Localization Tasks | 0 |
| Unrepresentative video data: A review and evaluation | 0 |
| Unseen Action Recognition with Unpaired Adversarial Multimodal Learning | 0 |
| Unsupervised Action Proposal Ranking through Proposal Recombination | 0 |
| Unsupervised Domain Adaptation for Action Recognition via Self-Ensembling and Conditional Embedding Alignment | 0 |
| Unsupervised Domain Adaptation for Spatio-Temporal Action Localization | 0 |
| Unsupervised Domain Adaptation for Zero-Shot Learning | 0 |
| Unsupervised Spectral Dual Assignment Clustering of Human Actions in Context | 0 |
| Using joint angles based on the international biomechanical standards for human action recognition and related tasks | 0 |
| Variational Conditional Dependence Hidden Markov Models for Skeleton-Based Action Recognition | 0 |
| Video action recognition for lane-change classification and prediction of surrounding vehicles | 0 |
| Video Action Recognition Using spatio-temporal optical flow video frames | 0 |
| Video Action Recognition Via Neural Architecture Searching | 0 |
| Video Action Recognition with Attentive Semantic Units | 0 |
| Video-Based Action Recognition Using Rate-Invariant Analysis of Covariance Trajectories | 0 |
| Video-based Human Action Recognition using Deep Learning: A Review | 0 |
| Video-based Person Re-identification via 3D Convolutional Networks and Non-local Attention | 0 |
| Perceptron Synthesis Network: Rethinking the Action Scale Variances in Videos | 0 |
| VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AdaTAD (VideoMAEv2-giant) | Avg mAP (0.3:0.7) | 76.9 | | Unverified |
| 2 | RDFA-S6 (InternVideo2-6B) | Avg mAP (0.3:0.7) | 74.2 | | Unverified |
| 3 | ActionMamba (InternVideo2-6B) | Avg mAP (0.3:0.7) | 72.72 | | Unverified |
| 4 | GCM | mAP IOU@0.1 | 72.5 | | Unverified |
| 5 | AGT (Ours) | mAP IOU@0.1 | 72.1 | | Unverified |
| 6 | InternVideo2-6B | Avg mAP (0.3:0.7) | 72 | | Unverified |
| 7 | ActionFormer (InternVideo features) | Avg mAP (0.3:0.7) | 71.58 | | Unverified |
| 8 | TriDet (VideoMAE v2-g feature) | Avg mAP (0.3:0.7) | 70.1 | | Unverified |
| 9 | InternVideo2-1B | Avg mAP (0.3:0.7) | 69.8 | | Unverified |
| 10 | ActionFormer (VideoMAE V2-g features) | Avg mAP (0.3:0.7) | 69.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | UnLoc-L | mAP IOU@0.5 | 59.3 | | Unverified |
| 2 | RDFA-S6 (InternVideo2-6B) | mAP | 42.9 | | Unverified |
| 3 | ActionMamba (InternVideo2-6B) | mAP | 42.02 | | Unverified |
| 4 | PRN+BMN (ensemble) | mAP | 42 | | Unverified |
| 5 | AdaTAD (VideoMAEv2-giant) | mAP | 41.93 | | Unverified |
| 6 | InternVideo2-6B | mAP | 41.2 | | Unverified |
| 7 | InternVideo2-1B | mAP | 40.4 | | Unverified |
| 8 | UniMD+Sync. | mAP | 39.83 | | Unverified |
| 9 | PRN (CSN) | mAP | 39.4 | | Unverified |
| 10 | InternVideo | mAP | 39 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RDFA-S6 (InternVideo2-6B) | Average-mAP | 45.8 | | Unverified |
| 2 | ActionMamba (InternVideo2-6B) | Average-mAP | 44.56 | | Unverified |
| 3 | DyFADet (VideoMAEv2) | Average-mAP | 44.3 | | Unverified |
| 4 | InternVideo2-6B | Average-mAP | 43.3 | | Unverified |
| 5 | TriDet (VideoMAEv2) | Average-mAP | 43.1 | | Unverified |
| 6 | InternVideo2-1B | Average-mAP | 42.4 | | Unverified |
| 7 | InternVideo | Average-mAP | 41.55 | | Unverified |
| 8 | TriDet (SlowFast) | Average-mAP | 38.6 | | Unverified |
| 9 | TriDet (I3D RGB) | Average-mAP | 36.8 | | Unverified |
| 10 | TadTr (I3D RGB) | Average-mAP | 32.09 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RDFA-S6 (InternVideo2-6B) | mAP | 29.6 | | Unverified |
| 2 | ActionMamba (InternVideo2-6B) | mAP | 29.04 | | Unverified |
| 3 | InternVideo2-6B | mAP | 27.7 | | Unverified |
| 4 | DyFADet (VideoMAE v2-g) | mAP | 23.8 | | Unverified |
| 5 | VideoMAE V2-g | mAP | 18.24 | | Unverified |
| 6 | InternVideo | mAP | 17.57 | | Unverified |
| 7 | BMN (i3d feature) | mAP | 9.25 | | Unverified |
| 8 | G-TAD (i3d feature) | mAP | 9.06 | | Unverified |
| 9 | DBG (i3d feature) | mAP | 6.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TriDet (VideoMAEv2) | Average mAP | 37.5 | | Unverified |
| 2 | DualDETR (I3D-rgb) | Average mAP | 32.64 | | Unverified |
| 3 | TriDet (I3D-rgb) | Average mAP | 30.7 | | Unverified |
| 4 | TemporalMaxer | Average mAP | 29.9 | | Unverified |
| 5 | PointTAD | Average mAP | 23.5 | | Unverified |
| 6 | PDAN | Average mAP | 17.3 | | Unverified |
| 7 | MS-TCT | Average mAP | 16.2 | | Unverified |
| 8 | MLAD | Average mAP | 14.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VideoCLIP | Recall | 47.3 | | Unverified |
| 2 | VLM | Recall | 46.5 | | Unverified |
| 3 | TACo | Recall | 42.5 | | Unverified |
| 4 | Text-Video Embedding | Recall | 33.6 | | Unverified |
| 5 | Fully-supervised upper-bound | Recall | 31.6 | | Unverified |
| 6 | Zhukov | Recall | 22.4 | | Unverified |
| 7 | Alayrac | Recall | 13.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AdaTAD (verb, VideoMAE-L) | Avg mAP (0.1-0.5) | 29.3 | | Unverified |
| 2 | TriDet (verb) | Avg mAP (0.1-0.5) | 25.4 | | Unverified |
| 3 | TemporalMaxer (verb) | Avg mAP (0.1-0.5) | 24.5 | | Unverified |
| 4 | ActionFormer (verb) | Avg mAP (0.1-0.5) | 23.5 | | Unverified |
| 5 | G-TAD (verb) | Avg mAP (0.1-0.5) | 9.4 | | Unverified |
| 6 | BMN (verb) | Avg mAP (0.1-0.5) | 8.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TemporalMaxer | mAP | 27.2 | | Unverified |
| 2 | MUSES | mAP | 18.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeepMetricLearner | mAP IOU@0.5 | 35.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ActionFormer (SlowFast+Omnivore+EgoVLP) | Average mAP | 21.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ActionFormer (SlowFast+Omnivore+EgoVLP) | Average mAP | 21.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | S-CNN | mAP | 7.4 | | Unverified |
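Metrics labelled "Avg mAP" or "Average-mAP" above are the mean of the per-threshold mAP values over the stated tIoU range (e.g. 0.3:0.7 usually means thresholds 0.3, 0.4, 0.5, 0.6, 0.7). A minimal sketch of that final averaging step; the per-threshold values here are made up for illustration, not taken from any table above:

```python
def average_map(map_per_threshold):
    """Mean of mAP values computed at each tIoU threshold."""
    return sum(map_per_threshold.values()) / len(map_per_threshold)

# Illustrative per-threshold mAP values at tIoU 0.3..0.7:
maps = {0.3: 0.82, 0.4: 0.79, 0.5: 0.73, 0.6: 0.64, 0.7: 0.51}
# average_map(maps) gives the single "Avg mAP (0.3:0.7)" figure, here 0.698.
```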