SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the action being performed in the video or image into one of a predefined set of action classes.
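Concretely, the task reduces to mapping a clip tensor to one label from a fixed set. As a minimal sketch (the pooling "model", class names, and shapes below are toy illustrations for the pipeline shape only, not any benchmark network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative label set; real benchmarks use hundreds of classes.
ACTIONS = ["walk", "run", "jump", "sit", "wave"]

def classify_clip(clip, weights, bias):
    """Classify one video clip of shape (T, H, W, C) into an action class.

    A real model would use a spatiotemporal network; here we simply
    average-pool the clip into one feature vector per channel and apply
    a linear classifier, to show the input/output contract of the task.
    """
    features = clip.mean(axis=(0, 1, 2))       # (C,) pooled feature
    logits = features @ weights + bias         # one score per action class
    return ACTIONS[int(np.argmax(logits))]

# Toy 16-frame RGB clip at 32x32 and a randomly initialized classifier.
clip = rng.random((16, 32, 32, 3))
weights = rng.standard_normal((3, len(ACTIONS)))
bias = np.zeros(len(ACTIONS))
print(classify_clip(clip, weights, bias))
```

The point is only the interface: a variable-length clip in, a single class label out; benchmark models differ in how the pooled feature is computed, not in this contract.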

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Note that some benchmarks may be listed under the Action Classification or Video Classification tasks instead, e.g. Kinetics-400.

Papers

Showing 1–50 of 2759 papers

Title | Status | Hype
InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7
TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis | Code | 6
DPFlow: Adaptive Optical Flow Estimation with a Dual-Pyramid Framework | Code | 4
SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models | Code | 4
Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | Code | 4
InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4
Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models | Code | 3
Harnessing Temporal Causality for Advanced Temporal Action Detection | Code | 3
Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects | Code | 3
Humans in 4D: Reconstructing and Tracking Humans with Transformers | Code | 3
MotionBERT: A Unified Perspective on Learning Human Motion Representations | Code | 3
Expanding Language-Image Pretrained Models for General Video Recognition | Code | 3
A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications | Code | 3
VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Code | 3
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | Code | 3
Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings | Code | 2
LLaVAction: evaluating and training multi-modal large language models for action recognition | Code | 2
Revealing Key Details to See Differences: A Novel Prototypical Perspective for Skeleton-based Action Recognition | Code | 2
AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | Code | 2
EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation | Code | 2
Rethinking Efficient and Effective Point-based Networks for Event Camera Classification and Regression: EventMamba | Code | 2
Leveraging Temporal Contextualization for Video Action Recognition | Code | 2
TIM: A Time Interval Machine for Audio-Visual Action Recognition | Code | 2
OmniVid: A Generative Framework for Universal Video Understanding | Code | 2
Understanding Long Videos with Multimodal Language Models | Code | 2
DeGCN: Deformable Graph Convolutional Networks for Skeleton-Based Action Recognition | Code | 2
vid-TLDR: Training Free Token merging for Light-weight Video Transformer | Code | 2
Hierarchical NeuroSymbolic Approach for Comprehensive and Explainable Action Quality Assessment | Code | 2
SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition | Code | 2
Dynamic 3D Point Cloud Sequences as 2D Videos | Code | 2
Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data | Code | 2
FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition | Code | 2
BlockGCN: Redefine Topology Awareness for Skeleton-Based Action Recognition | Code | 2
Hulk: A Universal Knowledge Translator for Human-Centric Tasks | Code | 2
Is Weakly-supervised Action Segmentation Ready For Human-Robot Interaction? No, Let's Improve It With Action-union Learning | Code | 2
Frozen Transformers in Language Models Are Effective Visual Encoder Layers | Code | 2
Valley: Video Assistant with Large Language model Enhanced abilitY | Code | 2
On the Benefits of 3D Pose and Tracking for Human Action Recognition | Code | 2
VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking | Code | 2
AIM: Adapting Image Models for Efficient Video Action Recognition | Code | 2
Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models | Code | 2
Learning Video Representations from Large Language Models | Code | 2
Deep Architectures for Content Moderation and Movie Content Rating | Code | 2
UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer | Code | 2
Revisiting Classifier: Transferring Vision-Language Models for Video Recognition | Code | 2
Revealing Single Frame Bias for Video-and-Language Learning | Code | 2
Egocentric Video-Language Pretraining | Code | 2
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition | Code | 2
ActionFormer: Localizing Moments of Actions with Transformers | Code | 2
HAKE: A Knowledge Engine Foundation for Human Activity Understanding | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified
4 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified
5 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified
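Top-1 and Top-5 Accuracy, as used in the leaderboard above, count a prediction as correct when the true label appears among the model's 1 or 5 highest-scoring classes. A small self-contained sketch (the logits and labels below are synthetic, not real model outputs):

```python
import numpy as np

def topk_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the k top-scoring classes."""
    topk = np.argsort(logits, axis=1)[:, -k:]        # indices of the k largest scores
    hits = (topk == labels[:, None]).any(axis=1)     # per-sample hit/miss
    return float(hits.mean())

# Synthetic scores for 3 clips over 4 classes, with known true labels.
logits = np.array([
    [0.1, 0.9, 0.0,  0.0],    # true class 1 ranked 1st
    [0.5, 0.4, 0.05, 0.05],   # true class 1 ranked 2nd
    [0.2, 0.1, 0.6,  0.15],   # true class 3 ranked 3rd
])
labels = np.array([1, 1, 3])

print(topk_accuracy(logits, labels, 1))   # only the first clip is a Top-1 hit
print(topk_accuracy(logits, labels, 3))   # all three clips are Top-3 hits
```

Top-5 is the same computation with k=5, which is why reported Top-5 numbers (e.g. 93.4 above) always upper-bound the corresponding Top-1 numbers.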
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified
3 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified
4 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | | Unverified
7 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified
8 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified
9 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified
10 | LGD-3D Two-stream | 3-fold Accuracy | 98.2 | | Unverified
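The "3-fold Accuracy" metric in the second table is, on benchmarks with three official train/test splits (UCF101/HMDB51-style), simply the mean of Top-1 accuracy over those three splits; the page does not name the benchmark, so the per-split numbers below are purely illustrative:

```python
def three_fold_accuracy(split_accuracies):
    """Average Top-1 accuracy over the three official train/test splits."""
    assert len(split_accuracies) == 3, "metric is defined over exactly 3 splits"
    return sum(split_accuracies) / 3

# Illustrative per-split accuracies, not taken from any paper above.
print(three_fold_accuracy([98.5, 98.7, 98.6]))
```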