SOTAVerified

Action Recognition

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the actions being performed into a predefined set of action classes.
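As a toy illustration of this classify-into-predefined-classes setup: the sketch below mean-pools per-frame feature vectors over time and scores the pooled clip feature against one prototype per class. The class names, prototype features, and clip are all made up for illustration; a real system would learn the features and classifier with a deep video network.

```python
ACTIONS = ["walking", "jumping", "waving"]

# Hypothetical per-class "prototype" features; a real model would learn
# these (e.g. as the weights of a linear classifier over deep features).
PROTOTYPES = {
    "walking": [1.0, 0.0, 0.0],
    "jumping": [0.0, 1.0, 0.0],
    "waving":  [0.0, 0.0, 1.0],
}

def classify_clip(frames):
    """frames: list of per-frame feature vectors (lists of floats).
    Mean-pool over time, then pick the class whose prototype has the
    highest dot product with the pooled clip feature."""
    n = len(frames)
    pooled = [sum(f[d] for f in frames) / n for d in range(len(frames[0]))]

    def score(action):
        return sum(p * q for p, q in zip(pooled, PROTOTYPES[action]))

    return max(ACTIONS, key=score)

# A fake 4-frame clip whose features lean toward "jumping".
clip = [[0.1, 0.9, 0.0], [0.0, 0.8, 0.1], [0.2, 0.7, 0.1], [0.1, 0.6, 0.2]]
print(classify_clip(clip))  # → jumping
```

Temporal mean pooling discards frame ordering; the methods listed below exist largely because real actions need richer temporal modeling than this.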

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when it is applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be located under the Action Classification or Video Classification tasks, e.g. Kinetics-400.

Papers

Showing 351–400 of 2759 papers

Title | Status | Hype
Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition | Code | 1
AutoLabel: CLIP-based framework for Open-set Video Domain Adaptation | Code | 1
MeteorNet: Deep Learning on Dynamic 3D Point Cloud Sequences | Code | 1
MGSampler: An Explainable Sampling Strategy for Video Action Recognition | Code | 1
MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing | Code | 1
AutoVideo: An Automated Video Action Recognition System | Code | 1
AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions | Code | 1
Holistic Interaction Transformer Network for Action Detection | Code | 1
Full-Body Articulated Human-Object Interaction | Code | 1
EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens | Code | 1
AViD Dataset: Anonymized Videos from Diverse Countries | Code | 1
Motion meets Attention: Video Motion Prompts | Code | 1
ViNet: Pushing the limits of Visual Modality for Audio-Visual Saliency Prediction | Code | 1
EgoAdapt: A multi-stream evaluation study of adaptation to real-world egocentric user video | Code | 1
CDFSL-V: Cross-Domain Few-Shot Learning for Videos | Code | 1
CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | Code | 1
B2C-AFM: Bi-Directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition | Code | 1
BABEL: Bodies, Action and Behavior with English Labels | Code | 1
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | Code | 1
Large Scale Holistic Video Understanding | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
End-to-End Streaming Video Temporal Action Segmentation with Reinforce Learning | Code | 1
Encoding Surgical Videos as Latent Spatiotemporal Graphs for Object and Anatomy-Driven Reasoning | Code | 1
End-to-End Learning of Visual Representations from Uncurated Instructional Videos | Code | 1
ACTION-Net: Multipath Excitation for Action Recognition | Code | 1
EPAM-Net: An Efficient Pose-driven Attention-guided Multimodal Network for Video Action Recognition | Code | 1
Multi-Modality Co-Learning for Efficient Skeleton-based Action Recognition | Code | 1
BASAR:Black-box Attack on Skeletal Action Recognition | Code | 1
Multiscale Vision Transformers | Code | 1
Multi-Semantic Fusion Model for Generalized Zero-Shot Skeleton-Based Action Recognition | Code | 1
Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion | Code | 1
Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition | Code | 1
3D CNNs with Adaptive Temporal Feature Resolutions | Code | 1
Benchmarking Micro-action Recognition: Dataset, Methods, and Applications | Code | 1
Spiking Neural Networks for event-based action recognition: A new task to understand their advantage | Code | 1
CAKES: Channel-wise Automatic KErnel Shrinking for Efficient 3D Networks | Code | 1
CAST: Cross-Attention in Space and Time for Video Action Recognition | Code | 1
BEVT: BERT Pretraining of Video Transformers | Code | 1
EventRPG: Event Data Augmentation with Relevance Propagation Guidance | Code | 1
Event Stream based Human Action Recognition: A High-Definition Benchmark Dataset and Algorithms | Code | 1
Anonymization for Skeleton Action Recognition | Code | 1
ExACT: Language-guided Conceptual Reasoning and Uncertainty Estimation for Event-based Action Recognition and More | Code | 1
Building an Open-Vocabulary Video CLIP Model with Better Architectures, Optimization and Data | Code | 1
NTU-X: An Enhanced Large-scale Dataset for Improving Pose-based Recognition of Subtle Human Actions | Code | 1
C2C: Component-to-Composition Learning for Zero-Shot Compositional Action Recognition | Code | 1
CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks | Code | 1
Home Action Genome: Cooperative Compositional Action Understanding | Code | 1
One-shot action recognition in challenging therapy scenarios | Code | 1
Exploring Few-Shot Adaptation for Activity Recognition on Diverse Domains | Code | 1
Bridging Video-text Retrieval with Multiple Choice Questions | Code | 1
Page 8 of 56

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MViTv2-B (IN-21K + Kinetics400 pretrain) | Top-5 Accuracy | 93.4 | | Unverified
2 | RSANet-R50 (8+16 frames, ImageNet pretrained, 2 clips) | Top-5 Accuracy | 91.1 | | Unverified
3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | Top-1 Accuracy | 77.3 | | Unverified
4 | DejaVid | Top-1 Accuracy | 77.2 | | Unverified
5 | InternVideo | Top-1 Accuracy | 77.2 | | Unverified
6 | InternVideo2-1B | Top-1 Accuracy | 77.1 | | Unverified
7 | VideoMAE V2-g | Top-1 Accuracy | 77 | | Unverified
8 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | Top-1 Accuracy | 76.7 | | Unverified
9 | Hiera-L (no extra data) | Top-1 Accuracy | 76.5 | | Unverified
10 | TubeViT-L | Top-1 Accuracy | 76.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FTP-UniFormerV2-L/14 | 3-fold Accuracy | 99.7 | | Unverified
2 | OmniVec2 | 3-fold Accuracy | 99.6 | | Unverified
3 | VideoMAE V2-g | 3-fold Accuracy | 99.6 | | Unverified
4 | OmniVec | 3-fold Accuracy | 99.6 | | Unverified
5 | BIKE | 3-fold Accuracy | 98.8 | | Unverified
6 | SMART | 3-fold Accuracy | 98.64 | | Unverified
7 | OmniSource (SlowOnly-8x8-R101-RGB + I3D-Flow) | 3-fold Accuracy | 98.6 | | Unverified
8 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy | 98.6 | | Unverified
9 | ZeroI2V ViT-L/14 | 3-fold Accuracy | 98.6 | | Unverified
10 | LGD-3D Two-stream | 3-fold Accuracy | 98.2 | | Unverified
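The Top-1 and Top-5 numbers in the benchmark tables above are instances of top-k accuracy: a prediction counts as correct when the ground-truth class is among the model's k highest-scoring classes, so Top-1 is ordinary classification accuracy and Top-5 is more forgiving. A minimal sketch of the metric, using toy scores and labels that are not drawn from any benchmark:

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scored
    classes. Top-1 accuracy is the usual classification accuracy."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of this row's classes, ranked by descending score
        ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
        hits += label in ranked[:k]
    return hits / len(labels)

# Toy scores for 3 clips over 3 classes, with the true labels alongside.
scores = [
    [0.1, 0.7, 0.2],  # predicted class 1
    [0.5, 0.1, 0.4],  # predicted class 0, runner-up class 2
    [0.2, 0.3, 0.5],  # predicted class 2
]
labels = [1, 2, 2]

print(topk_accuracy(scores, labels, k=1))  # 2 of 3 correct at top-1
print(topk_accuracy(scores, labels, k=2))  # 3 of 3 correct at top-2
```

The 3-fold Accuracy metric in the second table is the same top-1 measure averaged over a dataset's three standard train/test splits.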