Multimodal Recommendation

The multimodal recommendation task involves building systems that integrate multiple types of data, such as text, images, audio, and user interactions, to predict and suggest items matching a user's preferences. Unlike traditional approaches that rely on a single data modality, multimodal recommendation combines information from diverse sources to build richer, more nuanced representations of both users and items. This integration lets the system capture relationships and attributes that span data types, improving the accuracy and relevance of its recommendations. The primary goal is personalized suggestion: by effectively fusing heterogeneous data, the system better matches users with items they are likely to engage with or find valuable.
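To make the fusion idea concrete, below is a minimal sketch, not drawn from any paper listed here. It assumes precomputed per-item text and image embeddings, projects each modality into a shared latent space, fuses them by concatenation, and scores items against learned user embeddings with a dot product. The class name, dimensions, and concatenation-based fusion are all illustrative choices, not a reference implementation.

```python
import torch
import torch.nn as nn

class MultimodalRecommender(nn.Module):
    """Toy multimodal recommender: fuses per-item text and image
    embeddings, then scores items against user embeddings."""

    def __init__(self, n_users, text_dim, image_dim, latent_dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, latent_dim)
        # Project each modality into a shared latent space.
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.image_proj = nn.Linear(image_dim, latent_dim)
        # Fuse the concatenated modality vectors back to latent_dim.
        self.fusion = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, user_ids, text_feats, image_feats):
        # Per-modality projections, then late fusion by concatenation.
        t = torch.relu(self.text_proj(text_feats))
        v = torch.relu(self.image_proj(image_feats))
        item_vec = self.fusion(torch.cat([t, v], dim=-1))
        user_vec = self.user_emb(user_ids)
        # Dot-product score: higher means a stronger predicted match.
        return (user_vec * item_vec).sum(dim=-1)

# Usage: score 4 (user, item) pairs with random placeholder features.
model = MultimodalRecommender(n_users=100, text_dim=768, image_dim=512)
users = torch.randint(0, 100, (4,))
scores = model(users, torch.randn(4, 768), torch.randn(4, 512))
print(scores.shape)  # torch.Size([4])
```

Many of the papers below replace the concatenation step with graph-based propagation, attention, or disentangled representations, but the overall pattern of per-modality encoding followed by fusion and scoring is the same.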

Papers

Showing 21–30 of 59 papers

Title | Status | Hype
Enhancing Dyadic Relations with Homogeneous Graphs for Multimodal Recommendation | Code | 1
Generating with Fairness: A Modality-Diffused Counterfactual Framework for Incomplete Multimodal Recommendations | Code | 1
Causality-Inspired Fair Representation Learning for Multimodal Recommendation | Code | 1
GUME: Graphs and User Modalities Enhancement for Long-Tail Multimodal Recommendation | Code | 1
X-Reflect: Cross-Reflection Prompting for Multimodal Recommendation | – | 0
A Survey on Large Language Models in Multimodal Recommender Systems | – | 0
ATFLRec: A Multimodal Recommender System with Audio-Text Fusion and Low-Rank Adaptation via Instruction-Tuned Large Language Model | – | 0
Attention-guided Multi-step Fusion: A Hierarchical Fusion Network for Multimodal Recommendation | – | 0
Attribute-driven Disentangled Representation Learning for Multimodal Recommendation | – | 0
Bridging Domain Gaps between Pretrained Multimodal Models and Recommendations | – | 0

No leaderboard results yet.