
Multimodal Sentiment Analysis

Multimodal sentiment analysis is the task of performing sentiment analysis using multiple data sources, for example a camera feed of a speaker's face together with their recorded speech.
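The simplest way to combine multiple sources is late fusion: each modality gets its own sentiment predictor, and their scores are merged afterwards. The sketch below is illustrative only; the scores, weights, and `fuse_sentiment` helper are made-up placeholders, not the output or API of any system listed on this page.

```python
# Minimal late-fusion sketch for multimodal sentiment analysis.
# Assumption: each unimodal model has already produced a sentiment
# score in [-1, 1]; fusion is a weighted average of those scores.

def fuse_sentiment(scores: dict, weights: dict) -> float:
    """Weighted-average late fusion of per-modality sentiment scores."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical unimodal predictions for one utterance.
scores = {"text": 0.8, "audio": 0.2, "vision": -0.1}
weights = {"text": 0.5, "audio": 0.3, "vision": 0.2}

print(fuse_sentiment(scores, weights))  # 0.44
```

Many of the papers listed below replace this fixed weighting with learned fusion (attention, contrastive alignment, shared/private representations), but the input/output contract is the same: per-modality signals in, one sentiment prediction out.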

(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)

Papers

Showing 76–100 of 202 papers

| Title | Status | Hype |
| --- | --- | --- |
| Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning | | 0 |
| Multimodal Sentiment Analysis: A Survey | | 0 |
| Interpretable multimodal sentiment analysis based on textual modality descriptions by using large-scale language models | Code | 0 |
| The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation | Code | 1 |
| TextMI: Textualize Multimodal Information for Integrating Non-verbal Cues in Pre-trained Language Models | | 0 |
| Exploring Multimodal Sentiment Analysis via CBAM Attention and Double-layer BiLSTM Architecture | | 0 |
| MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models | Code | 1 |
| Curriculum Learning Meets Weakly Supervised Modality Correlation Learning | | 0 |
| UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | Code | 2 |
| A Self-Adjusting Fusion Representation Learning Model for Unaligned Text-Audio Sequences | | 0 |
| Few-shot Multimodal Sentiment Analysis based on Multimodal Probabilistic Fusion Prompts | Code | 1 |
| MARLIN: Masked Autoencoder for facial video Representation LearnINg | Code | 2 |
| Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations | Code | 1 |
| On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis | Code | 0 |
| Improving the Modality Representation with Multi-View Contrastive Learning for Multimodal Sentiment Analysis | | 0 |
| Multimodal Contrastive Learning via Uni-Modal Coding and Cross-Modal Prediction for Multimodal Sentiment Analysis | | 0 |
| Transfer Learning with Joint Fine-Tuning for Multimodal Sentiment Analysis | Code | 1 |
| Missing Modality meets Meta Sampling (M3S): An Efficient Universal Approach for Multimodal Sentiment Analysis with Missing Modality | | 0 |
| AMOA: Global Acoustic Feature Enhanced Modal-Order-Aware Network for Multimodal Sentiment Analysis | | 0 |
| Modeling Intra- and Inter-Modal Relations: Hierarchical Graph Contrastive Learning for Multimodal Sentiment Analysis | | 0 |
| Towards Exploiting Sticker for Multimodal Sentiment Analysis in Social Media: A New Dataset and Baseline | Code | 1 |
| TVLT: Textless Vision-Language Transformer | Code | 1 |
| Video-based Cross-modal Auxiliary Network for Multimodal Sentiment Analysis | Code | 0 |
| Cross-Modality Gated Attention Fusion for Multimodal Sentiment Analysis | | 0 |
| Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module | Code | 1 |
Page 4 of 9

No leaderboard results yet.