SOTAVerified

Multimodal Activity Recognition

Papers

Showing 1–10 of 30 papers

Title | Status | Hype
Temporal Segment Networks: Towards Good Practices for Deep Action Recognition | Code | 2
OPERAnet: A Multimodal Activity Recognition Dataset Acquired from Radio Frequency and Vision-based Sensors | Code | 1
Fusion-GCN: Multimodal Action Recognition using Graph Convolutional Networks | Code | 1
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1
Gimme Signals: Discriminative signal encoding for multimodal activity recognition | Code | 1
Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition | Code | 1
MuMu: Cooperative Multitask Learning-based Guided Multimodal Fusion | — | 0
Multi-GAT: A Graphical Attention-based Hierarchical Multimodal Representation Learning Approach for Human Activity Recognition | — | 0
MMAct: A Large-Scale Dataset for Cross Modal Human Action Understanding | — | 0
Activity recognition using ST-GCN with 3D motion data | — | 0

No leaderboard results yet.