SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the action being performed into a predefined set of action classes.

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenge of building video datasets has meant that most popular benchmarks for action recognition are small, on the order of 10k videos.
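As a minimal illustration of the task definition above, the sketch below maps a short video clip to scores over a fixed set of action classes. The tiny 3D CNN is a placeholder for illustration only (an assumption, not any of the models benchmarked below), and the 400-class output is modeled on the Kinetics-400 label set mentioned later on this page.

```python
import torch
import torch.nn as nn

# Minimal sketch of action classification: map a video clip to
# scores over a predefined set of action classes. The network is a
# toy placeholder, not a benchmarked architecture.
NUM_CLASSES = 400  # e.g. the Kinetics-400 label set (assumption)

model = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1),  # spatio-temporal features
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),                     # global pool over T, H, W
    nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),                  # per-class action scores
)

# Dummy clip: (batch, channels, frames, height, width)
clip = torch.randn(1, 3, 16, 112, 112)
logits = model(clip)         # shape (1, 400)
pred = logits.argmax(dim=1)  # predicted action class index
```

In practice the placeholder network would be replaced by a pretrained video backbone (e.g. a 3D ResNet or video transformer) and `pred` decoded through the dataset's class-name list.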

Please note some benchmarks may be located in the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 1051-1100 of 2759 papers

| Title | Status | Hype |
| --- | --- | --- |
| ActAR: Actor-Driven Pose Embeddings for Video Action Recognition | | 0 |
| Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding | Code | 1 |
| Invisible-to-Visible: Privacy-Aware Human Instance Segmentation using Airborne Ultrasound via Collaborative Learning Variational Autoencoder | | 0 |
| Model-agnostic Multi-Domain Learning with Domain-Specific Adapters for Action Recognition | | 0 |
| 3D Convolutional Networks for Action Recognition: Application to Sport Gesture Recognition | | 0 |
| Towards End-to-End Integration of Dialog History for Improved Spoken Language Understanding | | 0 |
| Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates | | 0 |
| SOS! Self-supervised Learning Over Sets Of Handled Objects In Egocentric Action Recognition | | 0 |
| Frequency Selective Augmentation for Video Representation Learning | | 0 |
| Probabilistic Representations for Video Contrastive Learning | | 0 |
| Hierarchical Self-supervised Representation Learning for Movie Understanding | | 0 |
| Temporal Alignment Networks for Long-term Video | Code | 1 |
| MM-SEAL: A Large-scale Video Dataset of Multi-person Multi-grained Spatio-temporally Action Localization | | 0 |
| OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses | Code | 0 |
| TALLFormer: Temporal Action Localization with a Long-memory Transformer | Code | 1 |
| Direct Dense Pose Estimation | | 0 |
| Vision Transformer with Cross-attention by Temporal Shift for Efficient Action Recognition | | 0 |
| ObjectMix: Data Augmentation by Copy-Pasting Objects in Videos for Action Recognition | | 0 |
| Stochastic Backpropagation: A Memory Efficient Strategy for Training Video Models | Code | 1 |
| SpatioTemporal Focus for Skeleton-based Action Recognition | | 0 |
| Controllable Augmentations for Video Representation Learning | | 0 |
| CycDA: Unsupervised Cycle Domain Adaptation from Image to Video | Code | 0 |
| SPAct: Self-supervised Privacy Preservation for Action Recognition | Code | 1 |
| Rethinking Zero-shot Action Recognition: Learning from Latent Atomic Actions | Code | 0 |
| Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities | Code | 0 |
| Class-Incremental Learning for Action Recognition in Videos | | 0 |
| FitCLIP: Refining Large-Scale Pretrained Image-Text Models for Zero-Shot Video Understanding Tasks | Code | 0 |
| Movie Genre Classification by Language Augmentation and Shot Sampling | Code | 0 |
| VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Code | 3 |
| FAR: Fourier Aerial Video Recognition | Code | 0 |
| Continual Spatio-Temporal Graph Convolutional Networks | Code | 1 |
| LocATe: End-to-end Localization of Actions in 3D with Transformers | | 0 |
| Point3D: tracking actions as moving points with 3D CNNs | | 0 |
| DirecFormer: A Directed Attention in Transformer Approach to Robust Action Recognition | Code | 1 |
| Group Contextualization for Video Recognition | Code | 1 |
| Gate-Shift-Fuse for Video Action Recognition | Code | 0 |
| Know your sensORs -- A Modality Study For Surgical Action Classification | | 0 |
| Context-LSTM: a robust classifier for video detection on UCF101 | | 0 |
| TFCNet: Temporal Fully Connected Networks for Static Unbiased Temporal Reasoning | | 0 |
| End-to-End Semantic Video Transformer for Zero-Shot Action Recognition | Code | 0 |
| Data-Folding and Hyperspace Coding for Multi-Dimensonal Time-Series Data Imaging | | 0 |
| Part-level Action Parsing via a Pose-guided Coarse-to-Fine Framework | | 0 |
| Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition | Code | 1 |
| Universal Prototype Transport for Zero-Shot Action Recognition and Localization | | 0 |
| Quantification of Occlusion Handling Capability of a 3D Human Pose Estimation Framework | Code | 0 |
| Behavior Recognition Based on the Integration of Multigranular Motion Features | | 0 |
| Learnable Irrelevant Modality Dropout for Multimodal Action Recognition on Modality-Specific Annotated Videos | | 0 |
| Domain Knowledge-Informed Self-Supervised Representations for Workout Form Assessment | Code | 1 |
| Meta-path Analysis on Spatio-Temporal Graphs for Pedestrian Trajectory Prediction | Code | 0 |
| Continuous Human Action Recognition for Human-Machine Interaction: A Review | | 0 |
Page 22 of 56

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified |
| 2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified |
| 3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified |
| 4 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified |
| 5 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified |
| 6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified |
| 7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified |
| 8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified |
| 9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified |
| 10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified |
| 2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified |
| 3 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified |
| 4 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified |
| 5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified |
| 6 | SMART | 3-fold Accuracy | 98.64 | | Unverified |
| 7 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified |
| 8 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified |
| 9 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified |
| 10 | Text4Vis | 3-fold Accuracy | 98.2 | | Unverified |