SOTAVerified

Multimodal Recommendation

The multimodal recommendation task involves developing systems that integrate multiple types of data—such as text, images, audio, and user interactions—to predict and suggest items that align with a user's preferences. Unlike traditional approaches that rely on a single data modality, multimodal recommendation draws on diverse information sources to build richer, more nuanced representations of both users and items. This integration lets the system capture complex relationships and attributes across data types, improving the accuracy and relevance of its recommendations. The primary goal is to provide personalized suggestions by effectively fusing heterogeneous data to better match users with items they are likely to engage with or find valuable.
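The fusion of modalities described above can be sketched in a few lines. The example below is an illustrative toy, not any specific paper's method: it assumes item embeddings from two hypothetical pretrained encoders (text and image), applies simple late fusion by normalizing and concatenating them, and ranks items for a user by dot-product score. The random vectors stand in for learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for 4 items: a text modality (dim 8) and an image modality
# (dim 6). In practice these would come from pretrained encoders.
text_emb = rng.normal(size=(4, 8))
image_emb = rng.normal(size=(4, 6))

def l2_normalize(x, eps=1e-8):
    """Scale rows to unit length so no single modality dominates the fused vector."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

# Late fusion: normalize each modality, then concatenate into one item vector.
item_emb = np.concatenate([l2_normalize(text_emb), l2_normalize(image_emb)], axis=1)

# A user preference vector in the same fused space (random here; in a real
# system it would be learned from the user's interaction history).
user_emb = rng.normal(size=item_emb.shape[1])

# Score every item by dot product with the user vector and take the top-k.
scores = item_emb @ user_emb
top_k = np.argsort(scores)[::-1][:2]
print("recommended item ids:", top_k.tolist())
```

Real systems replace the concatenation with learned fusion (attention, gating, or graph propagation, as several of the papers below do), but the ranking step—scoring fused item representations against a user representation—is the common core.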

Papers

Showing 26–50 of 59 papers

Title | Status | Hype
A Survey on Large Language Models in Multimodal Recommender Systems | | 0
ATFLRec: A Multimodal Recommender System with Audio-Text Fusion and Low-Rank Adaptation via Instruction-Tuned Large Language Model | | 0
Attention-guided Multi-step Fusion: A Hierarchical Fusion Network for Multimodal Recommendation | | 0
Attribute-driven Disentangled Representation Learning for Multimodal Recommendation | | 0
Bridging Domain Gaps between Pretrained Multimodal Models and Recommendations | | 0
Dealing with Missing Modalities in Multimodal Recommendation: a Feature Propagation-based Approach | | 0
Don't Lose Yourself: Boosting Multimodal Recommendation via Reducing Node-neighbor Discrepancy in Graph Convolutional Network | | 0
DREAM: A Dual Representation Learning Model for Multimodal Recommendation | | 0
HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression | | 0
ID Embedding as Subtle Features of Content and Structure for Multimodal Recommendation | | 0
Knowledge Soft Integration for Multimodal Recommendation | | 0
Learning ID-free Item Representation with Token Crossing for Multimodal Recommendation | | 0
MDVT: Enhancing Multimodal Recommendation with Model-Agnostic Multimodal-Driven Virtual Triplets | | 0
MMGRec: Multimodal Generative Recommendation with Transformer Model | | 0
Multi-Modal Hypergraph Enhanced LLM Learning for Recommendation | | 0
Multimodal Point-of-Interest Recommendation | | 0
Multimodal Pretraining and Generation for Recommendation: A Tutorial | | 0
Multimodal Recommendation Dialog with Subjective Preference: A New Challenge and Benchmark | | 0
Navigating the Future of Federated Recommendation Systems with Foundation Models | | 0
Dynamic Fusion Strategies for Federated Multimodal Recommendations | | 0
Rec-GPT4V: Multimodal Recommendation with Large Vision-Language Models | | 0
SynerGraph: An Integrated Graph Convolution Network for Multimodal Recommendation | | 0
Training-Free Graph Filtering via Multimodal Feature Refinement for Extremely Fast Multimodal Recommendation | | 0
Modality Reliability Guided Multimodal Recommendation | | 0
Multimodal Graph Neural Network for Recommendation with Dynamic De-redundancy and Modality-Guided Feature De-noisy | | 0
Page 2 of 3

No leaderboard results yet.