SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed into a predefined set of action classes.
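As a toy illustration of this formulation, the sketch below maps a clip tensor to one of a handful of hypothetical action classes. The per-channel pooling and linear scoring are stand-ins for a real trained video model; the class names and weights are made up.

```python
import numpy as np

# Hypothetical label set; real benchmarks use hundreds of classes.
ACTIONS = ["walking", "jumping", "waving"]

def classify_clip(clip: np.ndarray, weights: np.ndarray) -> str:
    """Score a clip of shape (frames, height, width, channels) against
    each action class and return the top prediction.

    `weights` stands in for a trained model: one scoring vector per
    class over crudely pooled per-channel features."""
    features = clip.mean(axis=(0, 1, 2))   # average over time and space
    logits = weights @ features            # one logit per action class
    return ACTIONS[int(np.argmax(logits))]

rng = np.random.default_rng(0)
clip = rng.random((16, 8, 8, 3))           # a 16-frame toy clip
weights = rng.random((len(ACTIONS), 3))
print(classify_clip(clip, weights))        # prints one of ACTIONS
```

Real models replace the pooling and linear scoring with a deep spatiotemporal network, but the input/output contract — clip in, class label out — is the same.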

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be located under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 201–250 of 2759 papers

| Title | Status | Hype |
| --- | --- | --- |
| Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition | Code | 1 |
| Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization | Code | 1 |
| A Body Part Embedding Model With Datasets for Measuring 2D Human Motion Similarity | Code | 1 |
| Concatenated Masked Autoencoders as Spatial-Temporal Learner | Code | 1 |
| Elaborative Rehearsal for Zero-shot Action Recognition | Code | 1 |
| Volterra Neural Networks (VNNs) | Code | 1 |
| Contrastive Learning from Extremely Augmented Skeleton Sequences for Self-supervised Action Recognition | Code | 1 |
| CT-Net: Channel Tensorization Network for Video Classification | Code | 1 |
| HierVL: Learning Hierarchical Video-Language Embeddings | Code | 1 |
| A Large-scale Study of Spatiotemporal Representation Learning with a New Benchmark on Action Recognition | Code | 1 |
| Encoding Surgical Videos as Latent Spatiotemporal Graphs for Object and Anatomy-Driven Reasoning | Code | 1 |
| Contrastive Learning from Spatio-Temporal Mixed Skeleton Sequences for Self-Supervised Skeleton-Based Action Recognition | Code | 1 |
| Large Scale Holistic Video Understanding | Code | 1 |
| ConvNet Architecture Search for Spatiotemporal Feature Learning | Code | 1 |
| End-to-End Streaming Video Temporal Action Segmentation with Reinforce Learning | Code | 1 |
| A Lie Group Approach to Riemannian Batch Normalization | Code | 1 |
| Hybrid Relation Guided Set Matching for Few-shot Action Recognition | Code | 1 |
| HYperbolic Self-Paced Learning for Self-Supervised Skeleton-based Action Representations | Code | 1 |
| A Closer Look at Spatiotemporal Convolutions for Action Recognition | Code | 1 |
| Cross-Architecture Self-supervised Video Representation Learning | Code | 1 |
| Implicit Temporal Modeling with Learnable Alignment for Video Recognition | Code | 1 |
| A Local-to-Global Approach to Multi-modal Movie Scene Segmentation | Code | 1 |
| CZU-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and 10 wearable inertial sensors | Code | 1 |
| DailyDVS-200: A Comprehensive Benchmark Dataset for Event-Based Action Recognition | Code | 1 |
| A Multigrid Method for Efficiently Training Video Models | Code | 1 |
| D^2ST-Adapter: Disentangled-and-Deformable Spatio-Temporal Adapter for Few-shot Action Recognition | Code | 1 |
| Data Efficient Video Transformer for Violence Detection | Code | 1 |
| An Action Is Worth Multiple Words: Handling Ambiguity in Action Recognition | Code | 1 |
| DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition | Code | 1 |
| IntegralAction: Pose-driven Feature Integration for Robust Human Action Recognition in Videos | Code | 1 |
| EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos | Code | 1 |
| Attention-Based Context Aware Reasoning for Situation Recognition | Code | 1 |
| Decoupling GCN with DropGraph Module for Skeleton-Based Action Recognition | Code | 1 |
| Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition | Code | 1 |
| ViViT: A Video Vision Transformer | Code | 1 |
| Action Transformer: A Self-Attention Model for Short-Time Pose-Based Human Action Recognition | Code | 1 |
| EgoNCE++: Do Egocentric Video-Language Models Really Understand Hand-Object Interactions? | Code | 1 |
| Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention | Code | 1 |
| A Comprehensive Study of Deep Video Action Recognition | Code | 1 |
| Discover and Mitigate Unknown Biases with Debiasing Alternate Networks | Code | 1 |
| KNN-MMD: Cross Domain Wireless Sensing via Local Distribution Alignment | Code | 1 |
| Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition | Code | 1 |
| EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | Code | 1 |
| DirecFormer: A Directed Attention in Transformer Approach to Robust Action Recognition | Code | 1 |
| Generative Action Description Prompts for Skeleton-based Action Recognition | Code | 1 |
| DEVIAS: Learning Disentangled Video Representations of Action and Scene | Code | 1 |
| Disentangled Pre-training for Human-Object Interaction Detection | Code | 1 |
| An Evaluation of Action Recognition Models on EPIC-Kitchens | Code | 1 |
| Disentangled Non-Local Neural Networks | Code | 1 |
| Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion | Code | 1 |
Page 5 of 56

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | — | Unverified |
| 2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | — | Unverified |
| 3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | — | Unverified |
| 4 | InternVideo | Top-1 Accuracy | 77.2 | — | Unverified |
| 5 | DejaVid | Top-1 Accuracy | 77.2 | — | Unverified |
| 6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | — | Unverified |
| 7 | VideoMAE V2-g | Top-1 Accuracy | 77 | — | Unverified |
| 8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | — | Unverified |
| 9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | — | Unverified |
| 10 | TubeViT-L | Top-1 Accuracy | 76.1 | — | Unverified |
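The Top-1 and Top-5 accuracy metrics reported above can be computed from a model's class scores as follows. A minimal NumPy sketch — the `top_k_accuracy` helper and the toy logits are illustrative, not taken from any listed model:

```python
import numpy as np

def top_k_accuracy(logits: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Fraction of samples whose true label is among the k highest-scoring
    classes. Top-1 accuracy is the special case k=1."""
    topk = np.argsort(logits, axis=1)[:, -k:]       # k best class indices per sample
    hits = (topk == labels[:, None]).any(axis=1)    # true label among them?
    return float(hits.mean())

# Toy scores: 3 samples, 4 classes.
logits = np.array([[0.1, 0.5, 0.2, 0.9],
                   [0.8, 0.1, 0.6, 0.3],
                   [0.2, 0.3, 0.9, 0.1]])
labels = np.array([3, 2, 0])
print(top_k_accuracy(logits, labels, k=1))  # 1/3 of samples correct at top-1
print(top_k_accuracy(logits, labels, k=2))  # 2/3 correct within the top 2
```

Top-5 is more forgiving than Top-1, which is why the first rows above report markedly higher numbers under that metric.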
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | — | Unverified |
| 2 | OmniVec2 | 3-fold Accuracy | 99.6 | — | Unverified |
| 3 | OmniVec | 3-fold Accuracy | 99.6 | — | Unverified |
| 4 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | — | Unverified |
| 5 | BIKE | 3-fold Accuracy | 98.8 | — | Unverified |
| 6 | SMART | 3-fold Accuracy | 98.64 | — | Unverified |
| 7 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | — | Unverified |
| 8 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | — | Unverified |
| 9 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | — | Unverified |
| 10 | Text4Vis | 3-fold Accuracy | 98.2 | — | Unverified |
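The 3-fold accuracy metric used above follows the common protocol for benchmarks with three official train/test splits (e.g. UCF-101): train and evaluate once per split, then report the mean. A minimal sketch — the per-split accuracies below are made-up numbers, not results from any listed model:

```python
def three_fold_accuracy(split_accuracies: list[float]) -> float:
    """Mean accuracy over the benchmark's three official train/test splits."""
    assert len(split_accuracies) == 3, "protocol requires exactly three splits"
    return sum(split_accuracies) / 3.0

# Hypothetical per-split results for one model.
print(round(three_fold_accuracy([99.5, 99.7, 99.6]), 2))  # 99.6
```

Averaging over the fixed splits reduces the variance that a single train/test partition would introduce, which matters when the leaderboard is separated by tenths of a point, as above.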