SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed in the video or image into a predefined set of action classes.
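
In essence, a trained model maps each clip to a score per action class, and the highest-scoring class is the prediction. A minimal sketch of that final classification step, with a hypothetical four-class label set (real benchmarks such as Kinetics-400 define hundreds of classes):

```python
import math

# Hypothetical label set for illustration only.
ACTION_CLASSES = ["archery", "bowling", "juggling", "playing piano"]

def classify_clip(logits):
    """Map raw per-class scores (as a video backbone would emit) to a label.

    Softmax turns the scores into probabilities; argmax picks the class.
    """
    m = max(logits)                               # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return ACTION_CLASSES[best], probs[best]

label, confidence = classify_clip([0.2, 2.5, 0.1, 0.4])
print(label)  # -> bowling
```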

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, having on the order of 10k videos.

Note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 1951–2000 of 2759 papers

Towards Universal Representation for Unseen Action Recognition
Tracking Human Pose by Tracking Symmetric Parts
Train, Diagnose and Fix: Interpretable Approach for Fine-grained Action Recognition
Training for temporal sparsity in deep neural networks, application in video processing
Trajectory Aligned Features For First Person Action Recognition
Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
Trajectory Convolution for Action Recognition
Transductive Universal Transport for Zero-Shot Action Recognition
Transductive Zero-Shot Action Recognition by Word-Vector Embedding
Transferable Feature Representation for Visible-to-Infrared Cross-Dataset Human Action Recognition
Transformed ROIs for Capturing Visual Transformations in Videos
Transformer-based Action recognition in hand-object interacting scenarios
Transformer-based Fusion of 2D-pose and Spatio-temporal Embeddings for Distracted Driver Action Recognition
Transformers in Action Recognition: A Review on Temporal Modeling
Transformers in Vision: A Survey
Transition Forests: Learning Discriminative Temporal Transitions for Action Recognition and Detection
TransNet: A Transfer Learning-Based Network for Human Action Recognition
Trear: Transformer-based RGB-D Egocentric Action Recognition
T-RECS: Training for Rate-Invariant Embeddings by Controlling Speed for Action Recognition
Trimmed Action Recognition, Dense-Captioning Events in Videos, and Spatio-temporal Action Localization with Focus on ActivityNet Challenge 2019
Trunk-branch Contrastive Network with Multi-view Deformable Aggregation for Multi-view Action Recognition
TSI: Temporal Saliency Integration for Video Action Recognition
TTPOINT: A Tensorized Point Cloud Network for Lightweight Action Recognition with Event Cameras
Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition
Two-stream joint matching method based on contrastive learning for few-shot action recognition
Two Stream LSTM: A Deep Fusion Framework for Human Action Recognition
Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition
Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles
Two-Stream Region Convolutional 3D Network for Temporal Activity Detection
Two-Stream RNN/CNN for Action Recognition in 3D Videos
Two Stream Self-Supervised Learning for Action Recognition
Two-stream Spatiotemporal Feature for Video QA Task
Two-Stream Transformer Architecture for Long Video Understanding
Two-Stream Video Classification with Cross-Modality Attention
UC Merced Submission to the ActivityNet Challenge 2016
Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos
Uncertainty-Guided Probabilistic Transformer for Complex Action Recognition
Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and the CARING Models
Understanding Ethics, Privacy, and Regulations in Smart Video Surveillance for Public Safety
Understanding Spatio-Temporal Relations in Human-Object Interaction using Pyramid Graph Convolutional Network
Understanding the Cross-Domain Capabilities of Video-Based Few-Shot Action Recognition Models
Understanding Video Transformers via Universal Concept Discovery
Unfolding Videos Dynamics via Taylor Expansion
Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition
Unified Keypoint-based Action Recognition Framework via Structured Keypoint Pooling
Unified Pose Sequence Modeling
Unifying Few- and Zero-Shot Egocentric Action Recognition
UniHPE: Towards Unified Human Pose Estimation via Contrastive Learning
Universal Prototype Transport for Zero-Shot Action Recognition and Localization
Universal-to-Specific Framework for Complex Action Recognition
Page 40 of 56

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | - | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | - | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | - | Unverified
4 | DejaVid | Top-1 Accuracy | 77.2 | - | Unverified
5 | InternVideo | Top-1 Accuracy | 77.2 | - | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | - | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | - | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | - | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | - | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | - | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | - | Unverified
3 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | - | Unverified
4 | OmniVec | 3-fold Accuracy | 99.6 | - | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | - | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | - | Unverified
7 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | - | Unverified
8 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | - | Unverified
9 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | - | Unverified
10 | LGD-3D Two-stream | 3-fold Accuracy | 98.2 | - | Unverified
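
The results above are reported as Top-1, Top-5, and 3-fold accuracy. Top-k accuracy counts a prediction as correct when the true label appears among the model's k highest-scoring classes; 3-fold accuracy conventionally means the mean Top-1 accuracy over a benchmark's three official train/test splits (as in UCF101 and HMDB51). A minimal sketch of the Top-k metric, with hypothetical toy scores:

```python
def topk_accuracy(score_lists, targets, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for scores, target in zip(score_lists, targets):
        # indices of the k largest scores, highest first
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += target in topk
    return hits / len(targets)

# Two toy samples over three classes: the first is correct at k=1,
# the second only becomes correct once k covers all classes.
scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]
print(topk_accuracy(scores, labels, 1))  # -> 0.5
print(topk_accuracy(scores, labels, 3))  # -> 1.0
```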