SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed into a predefined set of action classes.
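
A common recipe for video-level classification is to score individual frames and aggregate the scores over time ("late fusion"). The sketch below is a hypothetical, minimal illustration of that idea: the per-frame "model" is just a fixed random linear projection standing in for a trained backbone, and the label set and frame sizes are made up.

```python
import numpy as np

NUM_CLASSES = 4                 # hypothetical label set, e.g. {walk, run, jump, sit}
FRAME_SHAPE = (8, 8, 3)         # toy 8x8 RGB frames
NUM_FRAMES = 16                 # frames sampled uniformly from the clip

rng = np.random.default_rng(0)
# Toy "weights": a real system would use a trained CNN/ViT backbone here.
W = rng.standard_normal((int(np.prod(FRAME_SHAPE)), NUM_CLASSES))

def frame_logits(frame):
    """Per-frame class scores (stand-in for a trained per-frame classifier)."""
    return frame.reshape(-1) @ W

def classify_clip(frames):
    """Late temporal fusion: average per-frame logits, then take the argmax class."""
    logits = np.stack([frame_logits(f) for f in frames])  # shape (T, C)
    return int(logits.mean(axis=0).argmax())

clip = rng.standard_normal((NUM_FRAMES, *FRAME_SHAPE))
pred = classify_clip(clip)      # an index into the action label set
```

Averaging logits is the simplest temporal aggregation; the models listed below replace it with learned temporal modeling (3D convolutions, temporal attention, etc.).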

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be located under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 1–25 of 2759 papers

| Title | Status | Hype |
| --- | --- | --- |
| InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7 |
| TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis | Code | 6 |
| DPFlow: Adaptive Optical Flow Estimation with a Dual-Pyramid Framework | Code | 4 |
| SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models | Code | 4 |
| Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | Code | 4 |
| InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4 |
| Harnessing Temporal Causality for Advanced Temporal Action Detection | Code | 3 |
| Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models | Code | 3 |
| VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Code | 3 |
| MotionBERT: A Unified Perspective on Learning Human Motion Representations | Code | 3 |
| EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | Code | 3 |
| Humans in 4D: Reconstructing and Tracking Humans with Transformers | Code | 3 |
| Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects | Code | 3 |
| A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications | Code | 3 |
| Expanding Language-Image Pretrained Models for General Video Recognition | Code | 3 |
| Hierarchical NeuroSymbolic Approach for Comprehensive and Explainable Action Quality Assessment | Code | 2 |
| Frozen Transformers in Language Models Are Effective Visual Encoder Layers | Code | 2 |
| FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition | Code | 2 |
| HAKE: A Knowledge Engine Foundation for Human Activity Understanding | Code | 2 |
| Hulk: A Universal Knowledge Translator for Human-Centric Tasks | Code | 2 |
| ActionFormer: Localizing Moments of Actions with Transformers | Code | 2 |
| Egocentric Video-Language Pretraining | Code | 2 |
| AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition | Code | 2 |
| AIM: Adapting Image Models for Efficient Video Action Recognition | Code | 2 |
| DeGCN: Deformable Graph Convolutional Networks for Skeleton-Based Action Recognition | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified |
| 2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified |
| 3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified |
| 4 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified |
| 5 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified |
| 6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified |
| 7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified |
| 8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified |
| 9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified |
| 10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified |
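
The Top-1/Top-5 numbers above are top-k accuracies: a prediction counts as correct if the ground-truth class is among the model's k highest-scoring classes. A minimal sketch with made-up scores (the `topk_accuracy` helper is hypothetical, not an API of any listed model):

```python
import numpy as np

def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(logits, axis=1)[:, -k:]        # indices of the k largest logits
    return (topk == labels[:, None]).any(axis=1).mean()

# Toy scores for 3 clips over 3 classes (hypothetical numbers).
logits = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.3, 0.3, 0.4]])
labels = np.array([1, 2, 0])

top1 = topk_accuracy(logits, labels, k=1)   # only clip 0 is correct -> 1/3
top2 = topk_accuracy(logits, labels, k=2)   # clips 0 and 1 are correct -> 2/3
```

Top-5 accuracy is forgiving of near-misses among visually similar actions, which is why it runs well above Top-1 on the same models.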

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified |
| 2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified |
| 3 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified |
| 4 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified |
| 5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified |
| 6 | SMART | 3-fold Accuracy | 98.64 | | Unverified |
| 7 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified |
| 8 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified |
| 9 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified |
| 10 | Text4Vis | 3-fold Accuracy | 98.2 | | Unverified |
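
"3-fold Accuracy" follows the convention of benchmarks that ship three official train/test splits: the model is trained and evaluated once per split, and the mean of the three test accuracies is reported. A one-liner with hypothetical per-split numbers:

```python
# Hypothetical test accuracies on the three official train/test splits.
split_accuracies = [99.7, 99.5, 99.6]
three_fold = sum(split_accuracies) / len(split_accuracies)  # mean over the 3 splits
print(round(three_fold, 2))  # 99.6
```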