SOTAVerified

Multimodal Sentiment Analysis

Multimodal sentiment analysis is the task of performing sentiment analysis using multiple data sources, e.g. a camera feed of someone's face together with their recorded speech.
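To make the idea concrete, here is a minimal sketch of late fusion, one common way to combine modalities: each modality produces its own sentiment score, and the scores are merged with a weighted average. The modality names and weights below are illustrative assumptions, not taken from any paper in this list.

```python
# Minimal late-fusion sketch for multimodal sentiment analysis.
# Each modality yields a sentiment score in [-1, 1]; scores are
# combined with a weighted average. Names/weights are hypothetical.

def fuse_sentiment(scores: dict, weights: dict) -> float:
    """Weighted late fusion of per-modality sentiment scores."""
    total_weight = sum(weights[m] for m in scores)
    if total_weight == 0:
        raise ValueError("at least one modality must have nonzero weight")
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Example: text is mildly positive, the facial expression is strongly
# positive, and the speech prosody is neutral.
scores = {"text": 0.4, "vision": 0.8, "audio": 0.0}
weights = {"text": 0.5, "vision": 0.3, "audio": 0.2}
fused = fuse_sentiment(scores, weights)
print(fused)  # (0.4*0.5 + 0.8*0.3 + 0.0*0.2) / 1.0 = 0.44
```

Many of the papers below study more sophisticated fusion (attention-based, contrastive, or with missing modalities), but they address the same underlying problem this sketch illustrates.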

(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)

Papers

Showing 51–100 of 202 papers

Title | Status | Hype
Toward Robust Multimodal Learning using Multimodal Foundational Models | – | 0
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge | – | 0
Contextual Augmented Global Contrast for Multimodal Intent Recognition | – | 0
MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation | – | 0
Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach | – | 0
Explainable Multimodal Sentiment Analysis on Bengali Memes | – | 0
PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis | – | 0
Multimodal Sentiment Analysis: Perceived vs Induced Sentiments | – | 0
Improving Multimodal Sentiment Analysis: Supervised Angular Margin-based Contrastive Learning for Enhanced Fusion Representation | – | 0
Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss | – | 0
Multi-label Emotion Analysis in Conversation via Multimodal Knowledge Distillation | – | 0
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis | Code | 1
Robust Multimodal Learning with Missing Modalities via Parameter-Efficient Adaptation | – | 0
Exchanging-based Multimodal Fusion with Transformer | Code | 1
UniSA: Unified Generative Framework for Sentiment Analysis | Code | 1
Exploiting Diverse Feature for Multimodal Sentiment Analysis | – | 0
Multimodal Multi-loss Fusion Network for Sentiment Analysis | Code | 1
General Debiasing for Multimodal Sentiment Analysis | Code | 0
ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis | – | 0
Modality Influence in Multimodal Machine Learning | – | 0
Towards Arabic Multimodal Dataset for Sentiment Analysis | Code | 0
Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment analysis | – | 0
Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion | Code | 0
Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition | Code | 0
Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment | Code | 0
Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning | – | 0
Multimodal Sentiment Analysis: A Survey | – | 0
Interpretable multimodal sentiment analysis based on textual modality descriptions by using large-scale language models | Code | 0
The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation | Code | 1
TextMI: Textualize Multimodal Information for Integrating Non-verbal Cues in Pre-trained Language Models | – | 0
Exploring Multimodal Sentiment Analysis via CBAM Attention and Double-layer BiLSTM Architecture | – | 0
MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models | Code | 1
Curriculum Learning Meets Weakly Supervised Modality Correlation Learning | – | 0
UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | Code | 2
A Self-Adjusting Fusion Representation Learning Model for Unaligned Text-Audio Sequences | – | 0
Few-shot Multimodal Sentiment Analysis based on Multimodal Probabilistic Fusion Prompts | Code | 1
MARLIN: Masked Autoencoder for facial video Representation LearnINg | Code | 2
Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations | Code | 1
On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis | Code | 0
Improving the Modality Representation with Multi-View Contrastive Learning for Multimodal Sentiment Analysis | – | 0
Multimodal Contrastive Learning via Uni-Modal Coding and Cross-Modal Prediction for Multimodal Sentiment Analysis | – | 0
Transfer Learning with Joint Fine-Tuning for Multimodal Sentiment Analysis | Code | 1
Missing Modality meets Meta Sampling (M3S): An Efficient Universal Approach for Multimodal Sentiment Analysis with Missing Modality | – | 0
AMOA: Global Acoustic Feature Enhanced Modal-Order-Aware Network for Multimodal Sentiment Analysis | – | 0
Modeling Intra- and Inter-Modal Relations: Hierarchical Graph Contrastive Learning for Multimodal Sentiment Analysis | – | 0
Towards Exploiting Sticker for Multimodal Sentiment Analysis in Social Media: A New Dataset and Baseline | Code | 1
TVLT: Textless Vision-Language Transformer | Code | 1
Video-based Cross-modal Auxiliary Network for Multimodal Sentiment Analysis | Code | 0
Cross-Modality Gated Attention Fusion for Multimodal Sentiment Analysis | – | 0
Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module | Code | 1
Page 2 of 5

No leaderboard results yet.