SOTAVerified

Temporal Action Localization

Temporal Action Localization aims to detect action instances in untrimmed video streams and output their start and end timestamps, typically together with an action label. It is closely related to Temporal Action Proposal Generation.
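Predicted segments are scored against ground truth using temporal Intersection-over-Union (tIoU) between time intervals. A minimal sketch (the function name and the `(start, end)` segment format are illustrative, not taken from any particular benchmark toolkit):

```python
def temporal_iou(pred, gt):
    """tIoU between two segments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A 10 s detection overlapping the last 5 s of a 10 s ground-truth segment:
temporal_iou((0.0, 10.0), (5.0, 15.0))  # 5 / 15 ≈ 0.33
```

A detection counts as a true positive when its tIoU with an unmatched ground-truth segment meets the evaluation threshold (e.g. 0.5).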

Papers

Showing 851–900 of 1477 papers

Title | Status | Hype
Tensor Representations via Kernel Linearization for Action Recognition from 3D Skeletons (Extended Version) | | 0
Text-Enhanced Zero-Shot Action Recognition: A training-free approach | | 0
Theater Aid System for the Visually Impaired Through Transfer Learning of Spatio-Temporal Graph Convolution Networks | | 0
The Best of Both Worlds: Combining Data-independent and Data-driven Approaches for Action Recognition | | 0
The Globally Optimal Reparameterization Algorithm: an Alternative to Fast Dynamic Time Warping for Action Recognition in Video Sequences | | 0
The Imaginative Generative Adversarial Network: Automatic Data Augmentation for Dynamic Skeleton-Based Hand Gesture and Human Action Recognition | | 0
The THUMOS Challenge on Action Recognition for Videos "in the Wild" | | 0
Thin-Slicing for Pose: Learning to Understand Pose Without Explicit Pose Estimation | | 0
Three Birds with One Stone: Multi-Task Temporal Action Detection via Recycling Temporal Annotations | | 0
Three Branches: Detecting Actions With Richer Features | | 0
Three-stream network for enriched Action Recognition | | 0
Advancing Human Action Recognition with Foundation Models trained on Unlabeled Public Videos | | 0
Time Series Classification using the Hidden-Unit Logistic Model | | 0
Top-down Attention Recurrent VLAD Encoding for Action Recognition in Videos | | 0
Towards Adaptive Pseudo-label Learning for Semi-Supervised Temporal Action Localization | | 0
Towards an Unequivocal Representation of Actions | | 0
Towards a Skeleton-Based Action Recognition For Realistic Scenarios | | 0
Towards Automatic Speech Identification from Vocal Tract Shape Dynamics in Real-time MRI | | 0
Towards Good Practices for Action Video Encoding | | 0
Towards Improved Human Action Recognition Using Convolutional Neural Networks and Multimodal Fusion of Depth and Inertial Sensor Data | | 0
Towards Universal Representation for Unseen Action Recognition | | 0
Tracking Human Pose by Tracking Symmetric Parts | | 0
Train, Diagnose and Fix: Interpretable Approach for Fine-grained Action Recognition | | 0
Training for temporal sparsity in deep neural networks, application in video processing | | 0
Trajectory Aligned Features For First Person Action Recognition | | 0
Trajectory Convolution for Action Recognition | | 0
Transductive Zero-Shot Action Recognition by Word-Vector Embedding | | 0
Transferable Feature Representation for Visible-to-Infrared Cross-Dataset Human Action Recognition | | 0
Transferable Knowledge-Based Multi-Granularity Aggregation Network for Temporal Action Localization: Submission to ActivityNet Challenge 2021 | | 0
Transformer-based Fusion of 2D-pose and Spatio-temporal Embeddings for Distracted Driver Action Recognition | | 0
Transition Forests: Learning Discriminative Temporal Transitions for Action Recognition and Detection | | 0
TransNet: A Transfer Learning-Based Network for Human Action Recognition | | 0
T-RECS: Training for Rate-Invariant Embeddings by Controlling Speed for Action Recognition | | 0
Trimmed Action Recognition, Dense-Captioning Events in Videos, and Spatio-temporal Action Localization with Focus on ActivityNet Challenge 2019 | | 0
TSI: Temporal Saliency Integration for Video Action Recognition | | 0
TUHOI: Trento Universal Human Object Interaction Dataset | | 0
Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition | | 0
Two-Stream Consensus Network for Weakly-Supervised Temporal Action Localization | | 0
Two-Stream Consensus Network: Submission to HACS Challenge 2021 Weakly-Supervised Learning Track | | 0
Two Stream LSTM: A Deep Fusion Framework for Human Action Recognition | | 0
Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition | | 0
Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles | | 0
Two-Stream Networks for Weakly-Supervised Temporal Action Localization With Semantic-Aware Mechanisms | | 0
Two-Stream RNN/CNN for Action Recognition in 3D Videos | | 0
Two Stream Self-Supervised Learning for Action Recognition | | 0
Two-stream Spatiotemporal Feature for Video QA Task | | 0
UC Merced Submission to the ActivityNet Challenge 2016 | | 0
Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition | | 0
Unified Keypoint-based Action Recognition Framework via Structured Keypoint Pooling | | 0
UnLoc: A Unified Framework for Video Localization Tasks | | 0
Page 18 of 30

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AdaTAD (VideoMAEv2-giant) | Avg mAP (0.3:0.7) | 76.9 | | Unverified
2 | RDFA-S6 (InternVideo2-6B) | Avg mAP (0.3:0.7) | 74.2 | | Unverified
3 | ActionMamba (InternVideo2-6B) | Avg mAP (0.3:0.7) | 72.72 | | Unverified
4 | GCM | mAP IoU@0.1 | 72.5 | | Unverified
5 | AGT (Ours) | mAP IoU@0.1 | 72.1 | | Unverified
6 | InternVideo2-6B | Avg mAP (0.3:0.7) | 72 | | Unverified
7 | ActionFormer (InternVideo features) | Avg mAP (0.3:0.7) | 71.58 | | Unverified
8 | TriDet (VideoMAE v2-g feature) | Avg mAP (0.3:0.7) | 70.1 | | Unverified
9 | InternVideo2-1B | Avg mAP (0.3:0.7) | 69.8 | | Unverified
10 | ActionFormer (VideoMAE V2-g features) | Avg mAP (0.3:0.7) | 69.6 | | Unverified
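The "Avg mAP (0.3:0.7)" metric in the table above is mean average precision averaged over tIoU thresholds 0.3, 0.4, 0.5, 0.6 and 0.7. A single-class sketch of the usual greedy-matching protocol (the function name and the step-wise AP integration are illustrative simplifications; benchmark toolkits additionally average over action classes and may interpolate the PR curve):

```python
import numpy as np

def avg_map_single_class(preds, gts, thrs=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Average AP over tIoU thresholds for one action class.
    preds: list of (start, end, score); gts: list of (start, end)."""
    aps = []
    for thr in thrs:
        order = sorted(preds, key=lambda p: -p[2])  # rank by confidence
        matched = [False] * len(gts)                # each GT matches at most once
        tp = np.zeros(len(order))
        for i, (s, e, _) in enumerate(order):
            best_iou, best_j = 0.0, -1
            for j, (gs, ge) in enumerate(gts):
                inter = max(0.0, min(e, ge) - max(s, gs))
                union = (e - s) + (ge - gs) - inter
                iou = inter / union if union > 0 else 0.0
                if iou > best_iou:
                    best_iou, best_j = iou, j
            if best_j >= 0 and best_iou >= thr and not matched[best_j]:
                tp[i] = 1.0
                matched[best_j] = True
        cum_tp = np.cumsum(tp)
        recall = cum_tp / max(len(gts), 1)
        precision = cum_tp / (np.arange(len(order)) + 1)
        ap, prev_r = 0.0, 0.0                       # step integration of PR curve
        for r, p in zip(recall, precision):
            ap += (r - prev_r) * p
            prev_r = r
        aps.append(ap)
    return float(np.mean(aps))
```

With perfect detections the score is 1.0; a detection covering only half of a ground-truth segment passes the lower thresholds but fails the stricter ones, which is why averaging over 0.3:0.7 rewards tight localization.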
# | Model | Metric | Claimed | Verified | Status
1 | UnLoc-L | mAP IoU@0.5 | 59.3 | | Unverified
2 | RDFA-S6 (InternVideo2-6B) | mAP | 42.9 | | Unverified
3 | ActionMamba (InternVideo2-6B) | mAP | 42.02 | | Unverified
4 | PRN+BMN (ensemble) | mAP | 42 | | Unverified
5 | AdaTAD (VideoMAEv2-giant) | mAP | 41.93 | | Unverified
6 | InternVideo2-6B | mAP | 41.2 | | Unverified
7 | InternVideo2-1B | mAP | 40.4 | | Unverified
8 | UniMD+Sync. | mAP | 39.83 | | Unverified
9 | PRN (CSN) | mAP | 39.4 | | Unverified
10 | InternVideo | mAP | 39 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RDFA-S6 (InternVideo2-6B) | Average-mAP | 45.8 | | Unverified
2 | ActionMamba (InternVideo2-6B) | Average-mAP | 44.56 | | Unverified
3 | DyFADet (VideoMAEv2) | Average-mAP | 44.3 | | Unverified
4 | InternVideo2-6B | Average-mAP | 43.3 | | Unverified
5 | TriDet (VideoMAEv2) | Average-mAP | 43.1 | | Unverified
6 | InternVideo2-1B | Average-mAP | 42.4 | | Unverified
7 | InternVideo | Average-mAP | 41.55 | | Unverified
8 | TriDet (SlowFast) | Average-mAP | 38.6 | | Unverified
9 | TriDet (I3D RGB) | Average-mAP | 36.8 | | Unverified
10 | TadTR (I3D RGB) | Average-mAP | 32.09 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RDFA-S6 (InternVideo2-6B) | mAP | 29.6 | | Unverified
2 | ActionMamba (InternVideo2-6B) | mAP | 29.04 | | Unverified
3 | InternVideo2-6B | mAP | 27.7 | | Unverified
4 | DyFADet (VideoMAE v2-g) | mAP | 23.8 | | Unverified
5 | VideoMAE V2-g | mAP | 18.24 | | Unverified
6 | InternVideo | mAP | 17.57 | | Unverified
7 | BMN (I3D feature) | mAP | 9.25 | | Unverified
8 | G-TAD (I3D feature) | mAP | 9.06 | | Unverified
9 | DBG (I3D feature) | mAP | 6.75 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TriDet (VideoMAEv2) | Average mAP | 37.5 | | Unverified
2 | DualDETR (I3D-rgb) | Average mAP | 32.64 | | Unverified
3 | TriDet (I3D-rgb) | Average mAP | 30.7 | | Unverified
4 | TemporalMaxer | Average mAP | 29.9 | | Unverified
5 | PointTAD | Average mAP | 23.5 | | Unverified
6 | PDAN | Average mAP | 17.3 | | Unverified
7 | MS-TCT | Average mAP | 16.2 | | Unverified
8 | MLAD | Average mAP | 14.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VideoCLIP | Recall | 47.3 | | Unverified
2 | VLM | Recall | 46.5 | | Unverified
3 | TACo | Recall | 42.5 | | Unverified
4 | Text-Video Embedding | Recall | 33.6 | | Unverified
5 | Fully-supervised upper-bound | Recall | 31.6 | | Unverified
6 | Zhukov | Recall | 22.4 | | Unverified
7 | Alayrac | Recall | 13.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AdaTAD (verb, VideoMAE-L) | Avg mAP (0.1-0.5) | 29.3 | | Unverified
2 | TriDet (verb) | Avg mAP (0.1-0.5) | 25.4 | | Unverified
3 | TemporalMaxer (verb) | Avg mAP (0.1-0.5) | 24.5 | | Unverified
4 | ActionFormer (verb) | Avg mAP (0.1-0.5) | 23.5 | | Unverified
5 | G-TAD (verb) | Avg mAP (0.1-0.5) | 9.4 | | Unverified
6 | BMN (verb) | Avg mAP (0.1-0.5) | 8.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TemporalMaxer | mAP | 27.2 | | Unverified
2 | MUSES | mAP | 18.6 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeepMetricLearner | mAP IoU@0.5 | 35.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ActionFormer (SlowFast+Omnivore+EgoVLP) | Average mAP | 21.76 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ActionFormer (SlowFast+Omnivore+EgoVLP) | Average mAP | 21.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | S-CNN | mAP | 7.4 | | Unverified