SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed into a predefined set of action classes.
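At inference time the task reduces to mapping a clip's score vector to a distribution over a fixed label set. A minimal sketch in plain Python, where the dummy logits stand in for a real video model's output and the class names are purely illustrative:

```python
import math

# Hypothetical predefined action classes (illustrative only).
ACTIONS = ["walking", "running", "jumping", "waving", "sitting"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, classes=ACTIONS):
    """Return the highest-probability class and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best], probs[best]

# Dummy logits standing in for a video backbone's output for one clip.
label, prob = classify([0.2, 2.9, 0.1, -0.4, 0.3])
print(label)  # running
```

In a real system the logits would come from a spatiotemporal backbone (e.g. a 3D CNN or video transformer) applied to a stack of frames; only the final classification step is shown here.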

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be located under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 151–200 of 2759 papers

Title | Status | Hype
Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation | Code | 1
CDFSL-V: Cross-Domain Few-Shot Learning for Videos | Code | 1
SOAR: Scene-debiasing Open-set Action Recognition | Code | 1
SiT-MLP: A Simple MLP with Point-wise Topology Feature Learning for Skeleton-based Action Recognition | Code | 1
B2C-AFM: Bi-Directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition | Code | 1
Eventful Transformers: Leveraging Temporal Redundancy in Vision Transformers | Code | 1
POCO: 3D Pose and Shape Estimation with Confidence | Code | 1
Ske2Grid: Skeleton-to-Grid Representation Learning for Action Recognition | Code | 1
Masked Motion Predictors are Strong 3D Action Representation Learners | Code | 1
Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient | Code | 1
Zero-shot Skeleton-based Action Recognition via Mutual Information Estimation and Maximization | Code | 1
Human-centric Scene Understanding for 3D Large-scale Scenarios | Code | 1
Actor-agnostic Multi-label Action Recognition with Multi-modal Query | Code | 1
What Can Simple Arithmetic Operations Do for Temporal Modeling? | Code | 1
SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence Pre-training | Code | 1
Integrating Human Parsing and Pose Network for Human Action Recognition | Code | 1
Multimodal Distillation for Egocentric Action Recognition | Code | 1
Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition | Code | 1
Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition | Code | 1
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | Code | 1
EgoAdapt: A multi-stream evaluation study of adaptation to real-world egocentric user video | Code | 1
Dynamic Perceiver for Efficient Visual Recognition | Code | 1
Multi-Granularity Hand Action Detection | Code | 1
Seeing the Pose in the Pixels: Learning Pose-Aware Representations in Vision Transformers | Code | 1
Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition | Code | 1
Overcoming Topology Agnosticism: Enhancing Skeleton-Based Action Recognition through Redefined Skeletal Topology Awareness | Code | 1
Riemannian Multinomial Logistics Regression for SPD Neural Networks | Code | 1
Exploring Few-Shot Adaptation for Activity Recognition on Diverse Domains | Code | 1
M^2DAR: Multi-View Multi-Scale Driver Action Recognition with Vision Transformer | Code | 1
MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing | Code | 1
Part Aware Contrastive Learning for Self-Supervised Action Recognition | Code | 1
TSGCNeXt: Dynamic-Static Multi-Graph Convolution for Efficient Skeleton-Based Action Recognition with Long-term Learning Potential | Code | 1
Implicit Temporal Modeling with Learnable Alignment for Video Recognition | Code | 1
Robust Cross-Modal Knowledge Distillation for Unconstrained Videos | Code | 1
Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting | Code | 1
MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot Action Recognition | Code | 1
AutoLabel: CLIP-based framework for Open-set Video Domain Adaptation | Code | 1
Dual Contrastive Prediction for Incomplete Multi-view Representation Learning | Code | 1
STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition | Code | 1
Streaming Video Model | Code | 1
TimeBalance: Temporally-Invariant and Temporally-Distinctive Video Representations for Semi-Supervised Action Recognition | Code | 1
Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition | Code | 1
The effectiveness of MAE pre-pretraining for billion-scale pretraining | Code | 1
A Large-scale Study of Spatiotemporal Representation Learning with a New Benchmark on Action Recognition | Code | 1
Self-distillation for surgical action recognition | Code | 1
Dual-path Adaptation from Image to Video Transformers | Code | 1
Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances | Code | 1
Action knowledge for video captioning with graph neural networks | Code | 1
MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge | Code | 1
3DInAction: Understanding Human Actions in 3D Point Clouds | Code | 1
Page 4 of 56

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified
4 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified
5 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified
3 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified
4 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | | Unverified
7 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified
8 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified
9 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified
10 | Text4Vis | 3-fold Accuracy | 98.2 | | Unverified
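The Top-1 and Top-5 accuracy metrics reported above count a prediction as correct when the ground-truth label appears among the model's k highest-scoring classes. A minimal sketch of that computation, assuming per-sample score vectors (the toy scores and labels below are illustrative, not taken from any model in the table):

```python
def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: list of per-class score lists, one row per sample.
    labels: list of ground-truth class indices, one per sample.
    """
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k highest-scoring classes for this sample.
        top_k = sorted(range(len(row)), key=row.__getitem__, reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy example: 2 samples, 3 classes.
scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]
print(top_k_accuracy(scores, labels, k=1))  # 0.5  (second sample missed)
print(top_k_accuracy(scores, labels, k=3))  # 1.0  (k covers all classes)
```

The "3-fold Accuracy" in the second table is the same Top-1 computation averaged over a benchmark's three standard train/test splits.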