SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves identifying human actions in videos or images. The goal is to classify the action being performed into one of a predefined set of action classes.
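The task framing above (map a clip to one of a fixed set of action classes) can be sketched with a toy classifier over temporally pooled per-frame features. Everything here is illustrative: the class names, feature dimensions, and random weights are assumptions for the sketch, not taken from any benchmark model.

```python
import numpy as np

# Hypothetical set of predefined action classes (illustrative only).
ACTIONS = ["jumping", "running", "swimming", "waving"]

def classify_clip(frame_features, weights, bias):
    """Average per-frame features over time, then apply a linear
    classifier to score each predefined action class."""
    clip_feature = frame_features.mean(axis=0)  # temporal average pooling
    logits = clip_feature @ weights + bias      # one score per class
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax over classes
    return ACTIONS[int(np.argmax(probs))], probs

# Toy example: a "clip" of 16 frames with 8-dim features, random weights.
rng = np.random.default_rng(0)
features = rng.normal(size=(16, 8))
W = rng.normal(size=(8, len(ACTIONS)))
b = np.zeros(len(ACTIONS))

label, probs = classify_clip(features, W, b)
print(label, probs.round(3))
```

Real systems replace the random linear head with a learned spatiotemporal network (e.g. a 3D CNN or video transformer), but the output contract is the same: a probability distribution over the fixed label set.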

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will yield a similar performance boost when the network is applied to a different temporal task or dataset. The challenge of building video datasets has meant that the most popular benchmarks for action recognition are small, with on the order of 10k videos.

Please note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 151–200 of 2759 papers

Title | Status | Hype
Challenges in Video-Based Infant Action Recognition: A Critical Examination of the State of the Art | Code | 1
CIDEr: Consensus-based Image Description Evaluation | Code | 1
Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization | Code | 1
CAST: Cross-Attention in Space and Time for Video Action Recognition | Code | 1
CDFSL-V: Cross-Domain Few-Shot Learning for Videos | Code | 1
A Large-scale Study of Spatiotemporal Representation Learning with a New Benchmark on Action Recognition | Code | 1
Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition | Code | 1
CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | Code | 1
A Multigrid Method for Efficiently Training Video Models | Code | 1
CAKES: Channel-wise Automatic KErnel Shrinking for Efficient 3D Networks | Code | 1
Collaborating Domain-shared and Target-specific Feature Clustering for Cross-domain 3D Action Recognition | Code | 1
Complex Sequential Understanding through the Awareness of Spatial and Temporal Concepts | Code | 1
A Large-Scale Study on Video Action Dataset Condensation | Code | 1
3DV: 3D Dynamic Voxel for Action Recognition in Depth Video | Code | 1
Action Genome: Actions as Composition of Spatio-temporal Scene Graphs | Code | 1
Volterra Neural Networks (VNNs) | Code | 1
3DYoga90: A Hierarchical Video Dataset for Yoga Pose Understanding | Code | 1
Contrastive Learning from Extremely Augmented Skeleton Sequences for Self-supervised Action Recognition | Code | 1
ConvNet Architecture Search for Spatiotemporal Feature Learning | Code | 1
Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation | Code | 1
CT-Net: Channel Tensorization Network for Video Classification | Code | 1
A Deeper Dive Into What Deep Spatiotemporal Networks Encode: Quantifying Static vs. Dynamic Information | Code | 1
D^2ST-Adapter: Disentangled-and-Deformable Spatio-Temporal Adapter for Few-shot Action Recognition | Code | 1
A Dense-Sparse Complementary Network for Human Action Recognition based on RGB and Skeleton Modalities | Code | 1
Action knowledge for video captioning with graph neural networks | Code | 1
A Lie Group Approach to Riemannian Batch Normalization | Code | 1
Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition | Code | 1
Full-Body Articulated Human-Object Interaction | Code | 1
CLIP-guided Prototype Modulating for Few-shot Action Recognition | Code | 1
BST: Badminton Stroke-type Transformer for Skeleton-based Action Recognition in Racket Sports | Code | 1
DEVIAS: Learning Disentangled Video Representations of Action and Scene | Code | 1
DirecFormer: A Directed Attention in Transformer Approach to Robust Action Recognition | Code | 1
Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition | Code | 1
Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification | Code | 1
Domain Knowledge-Informed Self-Supervised Representations for Workout Form Assessment | Code | 1
ActionCLIP: A New Paradigm for Video Action Recognition | Code | 1
ACTION-Net: Multipath Excitation for Action Recognition | Code | 1
A Local-to-Global Approach to Multi-modal Movie Scene Segmentation | Code | 1
Dual-path Adaptation from Image to Video Transformers | Code | 1
Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP | Code | 1
3D CNNs with Adaptive Temporal Feature Resolutions | Code | 1
An Action Is Worth Multiple Words: Handling Ambiguity in Action Recognition | Code | 1
EgoAdapt: A multi-stream evaluation study of adaptation to real-world egocentric user video | Code | 1
Bridging Video-text Retrieval with Multiple Choice Questions | Code | 1
EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos | Code | 1
BMN: Boundary-Matching Network for Temporal Action Proposal Generation | Code | 1
Elaborative Rehearsal for Zero-shot Action Recognition | Code | 1
Encoding Surgical Videos as Latent Spatiotemporal Graphs for Object and Anatomy-Driven Reasoning | Code | 1
Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition | Code | 1
Bringing Online Egocentric Action Recognition into the wild | Code | 1
Page 4 of 56

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified
4 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified
5 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified
3 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified
4 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | | Unverified
7 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified
8 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified
9 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified
10 | Text4Vis | 3-fold Accuracy | 98.2 | | Unverified