SOTAVerified

Multimodal Sentiment Analysis

Multimodal sentiment analysis is the task of performing sentiment analysis using multiple data sources, e.g. a camera feed of a speaker's face together with their recorded speech.
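To make the task concrete, here is a minimal late-fusion sketch: each modality is encoded separately, the feature vectors are concatenated, and a linear head produces a sentiment score. The encoder functions, `fuse_and_score`, and the random projections are illustrative placeholders only, not the method of any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-modality encoders (e.g. a face-video model and a
# speech model); here they are just fixed random linear projections.
def encode_vision(frames: np.ndarray) -> np.ndarray:
    W = rng.standard_normal((frames.size, 8))
    return frames.reshape(-1) @ W          # 8-dim visual feature

def encode_audio(waveform: np.ndarray) -> np.ndarray:
    W = rng.standard_normal((waveform.size, 8))
    return waveform @ W                    # 8-dim acoustic feature

def fuse_and_score(frames, waveform, head):
    # Late fusion: concatenate modality features, then score.
    fused = np.concatenate([encode_vision(frames), encode_audio(waveform)])
    return float(np.tanh(fused @ head))    # sentiment score in (-1, 1)

frames = rng.standard_normal((4, 4))       # toy "video" input
waveform = rng.standard_normal(16)         # toy "audio" input
head = rng.standard_normal(16)             # linear sentiment head
score = fuse_and_score(frames, waveform, head)
print(score)
```

In practice the encoders are learned networks and fusion is often done with attention or gating rather than plain concatenation, but the overall shape (encode per modality, fuse, predict) is the same.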

(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)

Papers

Showing 51-75 of 202 papers

Title | Status | Hype
Toward Robust Multimodal Learning using Multimodal Foundational Models | - | 0
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge | - | 0
Contextual Augmented Global Contrast for Multimodal Intent Recognition | - | 0
MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation | - | 0
Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach | - | 0
Explainable Multimodal Sentiment Analysis on Bengali Memes | - | 0
PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis | - | 0
Multimodal Sentiment Analysis: Perceived vs Induced Sentiments | - | 0
Improving Multimodal Sentiment Analysis: Supervised Angular Margin-based Contrastive Learning for Enhanced Fusion Representation | - | 0
Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss | - | 0
Multi-label Emotion Analysis in Conversation via Multimodal Knowledge Distillation | - | 0
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis | Code | 1
Robust Multimodal Learning with Missing Modalities via Parameter-Efficient Adaptation | - | 0
Exchanging-based Multimodal Fusion with Transformer | Code | 1
UniSA: Unified Generative Framework for Sentiment Analysis | Code | 1
Exploiting Diverse Feature for Multimodal Sentiment Analysis | - | 0
Multimodal Multi-loss Fusion Network for Sentiment Analysis | Code | 1
General Debiasing for Multimodal Sentiment Analysis | Code | 0
ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis | - | 0
Modality Influence in Multimodal Machine Learning | - | 0
Towards Arabic Multimodal Dataset for Sentiment Analysis | Code | 0
Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment analysis | - | 0
Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion | Code | 0
Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition | Code | 0
Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment | Code | 0
Page 3 of 9

No leaderboard results yet.