SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed into a predefined set of action classes.

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.
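To make the task definition concrete, here is a minimal sketch of how a video-level action prediction is commonly derived from per-frame class scores (average pooling over time, then arg-max). The label set and the random stand-in for a backbone's logits are purely illustrative, not taken from any model on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical label set; real benchmarks use hundreds of classes.
ACTION_CLASSES = ["jumping", "running", "swimming"]

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def classify_video(frame_logits):
    """Average per-frame class probabilities over time, then pick the arg-max class."""
    probs = softmax(frame_logits, axis=-1)   # (T, C) per-frame probabilities
    video_probs = probs.mean(axis=0)         # temporal average pooling -> (C,)
    return ACTION_CLASSES[int(video_probs.argmax())], video_probs

# Stand-in for a backbone's per-frame logits on a 16-frame clip.
frame_logits = rng.normal(size=(16, len(ACTION_CLASSES)))
label, video_probs = classify_video(frame_logits)
```

Temporal average pooling is only the simplest aggregation; many papers in the list below replace it with attention or learned temporal modules.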

Papers

Showing 351–400 of 2759 papers

Title | Status | Hype
Feature Combination Meets Attention: Baidu Soccer Embeddings and Transformer based Temporal Detection | Code | 1
Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization | Code | 1
Can An Image Classifier Suffice For Action Recognition? | Code | 1
Vision-based Behavioral Recognition of Novelty Preference in Pigs | Code | 1
Towards Long-Form Video Understanding | Code | 1
VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning | Code | 1
Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos | Code | 1
Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting | Code | 1
MoVi: A large multi-purpose human motion and video dataset | Code | 1
BABEL: Bodies, Action and Behavior with English Labels | Code | 1
Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention | Code | 1
Space-time Mixing Attention for Video Transformer | Code | 1
Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers | Code | 1
Technical Report: Temporal Aggregate Representations | Code | 1
RegionViT: Regional-to-Local Attention for Vision Transformers | Code | 1
CT-Net: Channel Tensorization Network for Video Classification | Code | 1
DSANet: Dynamic Segment Aggregation Network for Video-Level Representation Learning | Code | 1
Sharing Pain: Using Pain Domain Transfer for Video Recognition of Low Grade Orthopedic Pain in Horses | Code | 1
Multimodal Fusion via Teacher-Student Network for Indoor Action Recognition | Code | 1
VPN++: Rethinking Video-Pose embeddings for understanding Activities of Daily Living | Code | 1
MutualNet: Adaptive ConvNet via Mutual Learning from Different Model Configurations | Code | 1
Home Action Genome: Cooperative Compositional Action Understanding | Code | 1
Unsupervised Visual Representation Learning by Tracking Patches in Video | Code | 1
Fusing Higher-order Features in Graph Neural Networks for Skeleton-based Action Recognition | Code | 1
CoCon: Cooperative-Contrastive Learning | Code | 1
3D Human Action Representation Learning via Cross-View Consistency Pursuit | Code | 1
Revisiting Skeleton-based Action Recognition | Code | 1
Multiscale Vision Transformers | Code | 1
VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text | Code | 1
ImageNet-21K Pretraining for the Masses | Code | 1
MGSampler: An Explainable Sampling Strategy for Video Action Recognition | Code | 1
Higher Order Recurrent Space-Time Transformer for Video Action Prediction | Code | 1
Action-Conditioned 3D Human Motion Synthesis with Transformer VAE | Code | 1
UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles | Code | 1
Learning Representational Invariances for Data-Efficient Action Recognition | Code | 1
Busy-Quiet Video Disentangling for Video Classification | Code | 1
No frame left behind: Full Video Action Recognition | Code | 1
ViViT: A Video Vision Transformer | Code | 1
An Image is Worth 16x16 Words, What is a Video Worth? | Code | 1
MoViNets: Mobile Video Networks for Efficient Video Recognition | Code | 1
ACTION-Net: Multipath Excitation for Action Recognition | Code | 1
VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples | Code | 1
BASAR:Black-box Attack on Skeletal Action Recognition | Code | 1
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack | Code | 1
VIPriors 1: Visual Inductive Priors for Data-Efficient Deep Learning Challenges | Code | 1
A Body Part Embedding Model With Datasets for Measuring 2D Human Motion Similarity | Code | 1
One-shot action recognition in challenging therapy scenarios | Code | 1
Learning Self-Similarity in Space and Time as Generalized Motion for Video Action Recognition | Code | 1
Negative Data Augmentation | Code | 1
Semi-Supervised Action Recognition with Temporal Contrastive Learning | Code | 1
Page 8 of 56

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | – | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | – | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | – | Unverified
4 | DejaVid | Top-1 Accuracy | 77.2 | – | Unverified
5 | InternVideo | Top-1 Accuracy | 77.2 | – | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | – | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | – | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | – | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | – | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | – | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | – | Unverified
3 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | – | Unverified
4 | OmniVec | 3-fold Accuracy | 99.6 | – | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | – | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | – | Unverified
7 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | – | Unverified
8 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | – | Unverified
9 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | – | Unverified
10 | LGD-3D Two-stream | 3-fold Accuracy | 98.2 | – | Unverified
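The Top-1 and Top-5 accuracies reported above share a single definition: a prediction counts as correct if the true label appears among the model's k highest-scoring classes. A minimal sketch of that metric, using tiny made-up logits rather than any leaderboard data:

```python
import numpy as np

def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(logits, axis=1)[:, -k:]       # indices of the k largest logits per row
    hits = (topk == labels[:, None]).any(axis=1)    # did the true label make the cut?
    return hits.mean()

# Three samples over three classes (illustrative values only).
logits = np.array([
    [0.1, 0.7, 0.2],
    [0.5, 0.2, 0.3],
    [0.2, 0.3, 0.5],
])
labels = np.array([1, 2, 2])

top1 = topk_accuracy(logits, labels, k=1)  # sample 2's arg-max is class 0, so 2/3
top2 = topk_accuracy(logits, labels, k=2)  # class 2 is in sample 2's top-2, so 3/3
```

The "3-fold Accuracy" in the second table is the same Top-1 measure averaged over a benchmark's three official train/test splits.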