
Multimodal Recommendation

The multimodal recommendation task involves developing systems that integrate multiple types of data, such as text, images, audio, and user interactions, to predict and suggest items that align with a user's preferences. Unlike traditional approaches that rely on a single data modality, multimodal recommendation draws on these diverse sources to build richer, more nuanced representations of both users and items. This integration lets the system capture relationships and attributes that span data types, improving the accuracy and relevance of its recommendations. The primary goal is to provide personalized suggestions by effectively fusing heterogeneous data, so that users are better matched with items they are likely to engage with or find valuable.
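As a rough illustration of the fusion idea described above (not the method of any specific paper listed below), the sketch here shows one common late-fusion pattern: project per-modality item features into a shared latent space, average them into a single item embedding, and score user-item pairs by inner product. All names, dimensions, and the random "features" are hypothetical stand-ins for the outputs of real pretrained encoders and a real training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; in practice the feature dimensions come from the
# chosen text/image encoders and the latent size is a model hyperparameter.
N_USERS, N_ITEMS = 4, 6
TXT_DIM, IMG_DIM, LATENT_DIM = 8, 10, 5

# Stand-ins for per-item modality features (normally extracted by
# pretrained text and image encoders).
item_text = rng.normal(size=(N_ITEMS, TXT_DIM))
item_image = rng.normal(size=(N_ITEMS, IMG_DIM))

# Projection matrices mapping each modality into a shared latent space.
# These would be learned; random values here are purely illustrative.
W_text = rng.normal(size=(TXT_DIM, LATENT_DIM))
W_image = rng.normal(size=(IMG_DIM, LATENT_DIM))

def fuse_item_embeddings(text_feats, image_feats):
    """Late fusion: project each modality to the shared space, then average."""
    t = text_feats @ W_text
    v = image_feats @ W_image
    return (t + v) / 2.0

# User embeddings learned from interaction data (random stand-ins here).
user_emb = rng.normal(size=(N_USERS, LATENT_DIM))
item_emb = fuse_item_embeddings(item_text, item_image)

# Score every user-item pair by inner product and take the top-k items.
scores = user_emb @ item_emb.T              # shape: (N_USERS, N_ITEMS)
top_k = np.argsort(-scores, axis=1)[:, :3]  # top-3 items per user
print("Top-3 item indices per user:\n", top_k)
```

Averaging projected features is the simplest fusion choice; many of the papers listed below instead combine modalities with graph convolutions, transformers, or attention-based weighting, but the scoring interface (a user embedding matched against a fused item embedding) is broadly the same.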

Papers

Showing 1–10 of 59 papers

Title | Status | Hype
A Comprehensive Survey on Multimodal Recommender Systems: Taxonomy, Evaluation, and Future Directions | Code | 2
Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation | Code | 2
End-to-end training of Multimodal Model and ranking Model | Code | 1
Disentangled Graph Variational Auto-Encoder for Multimodal Recommendation with Interpretability | Code | 1
AlignRec: Aligning and Training in Multimodal Recommendations | Code | 1
Ducho: A Unified Framework for the Extraction of Multimodal Features in Recommendation | Code | 1
Beyond Graph Convolution: Multimodal Recommendation with Topology-aware MLPs | Code | 1
COHESION: Composite Graph Convolutional Network with Dual-Stage Fusion for Multimodal Recommendation | Code | 1
A Tale of Two Graphs: Freezing and Denoising Graph Structures for Multimodal Recommendation | Code | 1
Ducho 2.0: Towards a More Up-to-Date Unified Framework for the Extraction of Multimodal Features in Recommendation | Code | 1

No leaderboard results yet.