SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the action being performed into one of a predefined set of action classes.
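A minimal sketch of this classification step, assuming per-frame class scores produced by some image-level backbone (the label set, scores, and function name below are purely illustrative): a common simple baseline averages the per-frame scores over time and takes the argmax over classes.

```python
import numpy as np

ACTION_CLASSES = ["walking", "running", "jumping"]  # hypothetical label set

def classify_clip(frame_scores):
    """Classify a video clip from per-frame class scores.

    frame_scores: (T, C) array of per-frame scores (e.g. softmax outputs
    from an image backbone applied to each of T frames).
    Temporal average pooling followed by argmax over the C classes.
    """
    clip_scores = np.asarray(frame_scores).mean(axis=0)  # average over time
    return ACTION_CLASSES[int(np.argmax(clip_scores))]

# Toy scores for a 4-frame clip: "running" dominates on most frames.
scores = np.array([
    [0.1, 0.7, 0.2],
    [0.2, 0.6, 0.2],
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
])
print(classify_clip(scores))  # "running"
```

Stronger methods (3D CNNs, video transformers) replace the naive temporal average with learned spatio-temporal modeling, but the input/output contract is the same.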

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when the network is applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 1301–1350 of 2759 papers

Title | Status | Hype
EAN: Event Adaptive Network for Enhanced Action Recognition | Code | 1
Evidential Deep Learning for Open Set Action Recognition | Code | 1
UNIK: A Unified Framework for Real-world Skeleton-based Action Recognition | Code | 1
LSFB-CONT and LSFB-ISOL: Two New Datasets for Vision-Based Sign Language Recognition | Code | 0
Federated Action Recognition on Heterogeneous Embedded Devices | — | 0
Group Activity Recognition Using Joint Learning of Individual Action Recognition and People Grouping | Code | 0
Data Efficient Video Transformer for Violence Detection | Code | 1
STAR: Sparse Transformer-based Action Recognition | Code | 1
Training for temporal sparsity in deep neural networks, application in video processing | — | 0
Delta Sampling R-BERT for limited data and low-light action recognition | — | 0
Aligning Correlation Information for Domain Adaptation in Action Recognition | — | 0
Interpretable Deep Feature Propagation for Early Action Recognition | — | 0
Review of Video Predictive Understanding: Early Action Recognition and Future Action Prediction | — | 0
TA2N: Two-Stage Action Alignment Network for Few-shot Action Recognition | Code | 1
Video 3D Sampling for Self-supervised Representation Learning | — | 0
Federated Learning for Multi-Center Imaging Diagnostics: A Study in Cardiovascular Disease | Code | 1
PoliTO-IIT Submission to the EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition | — | 0
Action Transformer: A Self-Attention Model for Short-Time Pose-Based Human Action Recognition | Code | 1
VideoLightFormer: Lightweight Action Recognition using Transformers | — | 0
Attention Bottlenecks for Multimodal Fusion | Code | 0
Word-level Sign Language Recognition with Multi-stream Neural Networks Focusing on Local Regions and Skeletal Information | — | 0
Long-Short Temporal Modeling for Efficient Action Recognition | — | 0
Constructing Stronger and Faster Baselines for Skeleton-based Action Recognition | Code | 1
Feature Combination Meets Attention: Baidu Soccer Embeddings and Transformer based Temporal Detection | Code | 1
Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization | Code | 1
Can An Image Classifier Suffice For Action Recognition? | Code | 1
Video Swin Transformer | Code | 2
Vision-based Behavioral Recognition of Novelty Preference in Pigs | Code | 1
Team PyKale (xy9) Submission to the EPIC-Kitchens 2021 Unsupervised Domain Adaptation Challenge for Action Recognition | — | 0
Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures | Code | 0
SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild | — | 0
Towards Long-Form Video Understanding | Code | 1
VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning | Code | 1
Learning Graphs for Knowledge Transfer With Limited Labels | — | 0
Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos | Code | 1
Graph-Based High-Order Relation Modeling for Long-Term Action Recognition | — | 0
Spatio-temporal Contrastive Domain Adaptation for Action Recognition | — | 0
Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting | Code | 1
EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2021: Team M3EM Technical Report | — | 0
MoVi: A large multi-purpose human motion and video dataset | Code | 1
MaCLR: Motion-aware Contrastive Learning of Representations for Videos | Code | 0
BABEL: Bodies, Action and Behavior with English Labels | Code | 1
Long-Short Temporal Contrastive Learning of Video Transformers | — | 0
Gradient Forward-Propagation for Large-Scale Temporal Video Modelling | — | 0
Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention | Code | 1
Space-time Mixing Attention for Video Transformer | Code | 1
Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers | Code | 1
Towards Training Stronger Video Vision Transformers for EPIC-KITCHENS-100 Action Recognition | — | 0
Technical Report: Temporal Aggregate Representations | Code | 1
Transformed ROIs for Capturing Visual Transformations in Videos | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | — | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | — | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | — | Unverified
4 | DejaVid | Top-1 Accuracy | 77.2 | — | Unverified
5 | InternVideo | Top-1 Accuracy | 77.2 | — | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | — | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | — | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | — | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | — | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | — | Unverified
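The Top-1 and Top-5 accuracy metrics in these tables can be sketched as follows (a minimal numpy sketch; the function name and toy scores are illustrative, not part of the leaderboard's evaluation code): a prediction counts as a hit if the true label is among the k highest-scoring classes.

```python
import numpy as np

def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k top-scoring classes.

    scores: (N, C) array of per-sample class scores.
    labels: length-N sequence of true class indices.
    """
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k largest scores per row
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# Toy batch: 3 samples, 4 classes.
scores = np.array([
    [0.1, 0.5, 0.3, 0.1],   # true class 1 -> top-1 hit
    [0.4, 0.2, 0.3, 0.1],   # true class 2 -> top-1 miss, top-2 hit
    [0.2, 0.2, 0.1, 0.5],   # true class 0 -> miss either way
])
labels = [1, 2, 0]
print(topk_accuracy(scores, labels, k=1))  # 1/3
print(topk_accuracy(scores, labels, k=2))  # 2/3
```

Top-5 accuracy is simply this with k=5, which is why it sits well above Top-1 for the same model.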
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | — | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | — | Unverified
3 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | — | Unverified
4 | OmniVec | 3-fold Accuracy | 99.6 | — | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | — | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | — | Unverified
7 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | — | Unverified
8 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | — | Unverified
9 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | — | Unverified
10 | LGD-3D Two-stream | 3-fold Accuracy | 98.2 | — | Unverified