SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the action being performed into one of a predefined set of action classes.
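As a minimal, illustrative sketch of this setup (the action names, feature dimensions, and linear classifier below are assumptions for demonstration, not any benchmarked model): per-frame features are pooled over time and scored against the predefined class set.

```python
import numpy as np

# Hypothetical predefined set of action classes.
ACTIONS = ["walking", "jumping", "waving"]

def classify_clip(frame_features: np.ndarray, W: np.ndarray, b: np.ndarray) -> str:
    """frame_features: (T, D) array of per-frame descriptors."""
    clip_feature = frame_features.mean(axis=0)   # temporal average pooling -> (D,)
    logits = W @ clip_feature + b                # one score per action class
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over the class set
    return ACTIONS[int(np.argmax(probs))]

rng = np.random.default_rng(0)
T, D = 16, 8                                     # 16 frames, 8-dim toy features
clip = rng.normal(size=(T, D))
W, b = rng.normal(size=(len(ACTIONS), D)), np.zeros(len(ACTIONS))
print(classify_clip(clip, W, b))                 # prints one of ACTIONS
```

Real systems replace the random features and linear head with a learned video backbone, but the output contract is the same: a distribution over a fixed class vocabulary.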

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 851–900 of 2759 papers

- Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
- Action Recognition and State Change Prediction in a Recipe Understanding Task Using a Lightweight Neural Network Model
- Enhancing Human Action Recognition and Violence Detection Through Deep Learning Audiovisual Fusion
- FuTH-Net: Fusing Temporal Relations and Holistic Features for Aerial Video Classification
- GCF-Net: Gated Clip Fusion Network for Video Action Recognition
- Event and Activity Recognition in Video Surveillance for Cyber-Physical Systems
- Event-based Action Recognition Using Timestamp Image Encoding Network
- Event-based Timestamp Image Encoding Network for Human Action Recognition and Anticipation
- Global Context-Aware Attention LSTM Networks for 3D Action Recognition
- EventCrab: Harnessing Frame and Point Synergy for Event-based Action Recognition and Beyond
- Behavior Recognition Based on the Integration of Multigranular Motion Features
- EA-VTR: Event-Aware Video-Text Retrieval
- AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos
- Early Action Recognition with Action Prototypes
- EventTransAct: A video transformer-based framework for Event-camera based action recognition
- Event Transformer+. A multi-purpose solution for efficient event data processing
- Bayesian Non-Parametric Inference for Manifold Based MoCap Representation
- Bypass Enhancement RGB Stream Model for Pedestrian Action Recognition of Autonomous Vehicles
- FSD-10: A Dataset for Competitive Sports Content Analysis
- EAGLE: Egocentric AGgregated Language-video Engine
- Evolving Losses for Unsupervised Video Representation Learning
- Evolving Skeletons: Motion Dynamics in Action Recognition
- Adversarial Self-Supervised Learning for Semi-Supervised 3D Action Recognition
- DynamoNet: Dynamic Action and Motion Network
- Examining Interpretable Feature Relationships in Deep Networks for Action recognition
- CAMREP- Concordia Action and Motion Repository
- EXMOVES: Classifier-based Features for Scalable Action Recognition
- Egocentric and Exocentric Methods: A Short Survey
- Bayesian Graph Convolution LSTM for Skeleton Based Action Recognition
- MultiFuser: Multimodal Fusion Transformer for Enhanced Driver Action Recognition
- Expansion-Squeeze-Excitation Fusion Network for Elderly Activity Recognition
- Canonical Correlation Analysis for Misaligned Satellite Image Change Detection
- Fully-Coupled Two-Stream Spatiotemporal Networks for Extremely Low Resolution Action Recognition
- Exploiting deep residual networks for human action recognition from skeletal data
- Exploiting Inter-Frame Regional Correlation for Efficient Action Recognition
- Exploiting Motion Information from Unlabeled Videos for Static Image Action Recognition
- Exploiting Spatial-Temporal Modelling and Multi-Modal Fusion for Human Action Recognition
- Exploiting Structure Sparsity for Covariance-based Visual Representation
- Exploiting the ConvLSTM: Human Action Recognition using Raw Depth Video-Based Recurrent Neural Networks
- Dynamic Spatio-Temporal Specialization Learning for Fine-Grained Action Recognition
- Dynamic Spatial-temporal Hypergraph Convolutional Network for Skeleton-based Action Recognition
- Dynamic Sampling Networks for Efficient Action Recognition in Videos
- Exploring Missing Modality in Multimodal Egocentric Datasets
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition
- Exploring Relations in Untrimmed Videos for Self-Supervised Learning
- Exploring Sub-Pseudo Labels for Learning from Weakly-Labeled Web Videos
- Cascaded Interactional Targeting Network for Egocentric Video Analysis
- Exploring the Impact of Hand Pose and Shadow on Hand-washing Action Recognition
- CASPER: Cognitive Architecture for Social Perception and Engagement in Robots
- Dynamic Probabilistic Network Based Human Action Recognition
Page 18 of 56

Benchmark Results

#  | Model                                                  | Metric         | Claimed | Verified | Status
1  | MViTv2-B (IN-21K + Kinetics400 pretrain)               | Top-5 Accuracy | 93.4    | -        | Unverified
2  | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1    | -        | Unverified
3  | MVD (Kinetics400 pretrain, ViT-H, 16 frame)            | Top-1 Accuracy | 77.3    | -        | Unverified
4  | DejaVid                                                | Top-1 Accuracy | 77.2    | -        | Unverified
5  | InternVideo                                            | Top-1 Accuracy | 77.2    | -        | Unverified
6  | InternVideo2-1B                                        | Top-1 Accuracy | 77.1    | -        | Unverified
7  | VideoMAE V2-g                                          | Top-1 Accuracy | 77      | -        | Unverified
8  | MVD (Kinetics400 pretrain, ViT-L, 16 frame)            | Top-1 Accuracy | 76.7    | -        | Unverified
9  | Hiera-L (no extra data)                                | Top-1 Accuracy | 76.5    | -        | Unverified
10 | TubeViT-L                                              | Top-1 Accuracy | 76.1    | -        | Unverified
#  | Model                                         | Metric          | Claimed | Verified | Status
1  | FTP-UniFormerV2-L/14                          | 3-fold Accuracy | 99.7    | -        | Unverified
2  | OmniVec2                                      | 3-fold Accuracy | 99.6    | -        | Unverified
3  | VideoMAE V2-g                                 | 3-fold Accuracy | 99.6    | -        | Unverified
4  | OmniVec                                       | 3-fold Accuracy | 99.6    | -        | Unverified
5  | BIKE                                          | 3-fold Accuracy | 98.8    | -        | Unverified
6  | SMART                                         | 3-fold Accuracy | 98.64   | -        | Unverified
7  | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6    | -        | Unverified
8  | PERF-Net (multi-distilled S3D)                | 3-fold Accuracy | 98.6    | -        | Unverified
9  | ZeroI2V ViT-L/14                              | 3-fold Accuracy | 98.6    | -        | Unverified
10 | LGD-3D Two-stream                             | 3-fold Accuracy | 98.2    | -        | Unverified
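The metrics in the tables above can be computed the same way: Top-1 accuracy counts a prediction correct only when the highest-scoring class matches the label, while Top-k (e.g. Top-5) counts it correct when the label appears among the k highest-scoring classes; 3-fold accuracy is typically this accuracy averaged over three standard train/test splits. A small sketch with toy scores (all data below is made up for illustration):

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """scores: (N, C) per-class scores; labels: (N,) ground-truth class indices."""
    top_k = np.argsort(scores, axis=1)[:, -k:]   # indices of the k best classes per sample
    hits = [label in row for row, label in zip(top_k, labels)]
    return float(np.mean(hits))

# Toy data: 3 clips, 3 classes.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 2, 2])

print(top_k_accuracy(scores, labels, 1))   # Top-1: 2 of 3 correct
print(top_k_accuracy(scores, labels, 3))   # Top-3 with 3 classes: always 1.0
```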