
Multimodal Recommendation

The multimodal recommendation task involves developing systems that integrate multiple types of data, such as text, images, audio, and user interactions, to predict and suggest items that align with a user's preferences. Unlike traditional recommendation approaches that rely on a single data modality, multimodal recommendation draws on information from diverse sources to build richer, more nuanced representations of both users and items. This integration lets the system capture complex relationships and attributes across data types, improving the accuracy and relevance of its recommendations. The goal is to deliver personalized suggestions by effectively fusing heterogeneous data so that users are better matched with items they are likely to engage with or find valuable. A concrete sketch of this fusion pattern follows below.
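To make the fusion idea concrete, here is a minimal sketch of one common pattern: late fusion of pre-extracted modality features, scored against a learned user embedding. This is an illustrative toy model, not the method of any paper listed on this page; the class name, the dimensions, and the fusion rule (averaging projected features) are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class MultimodalItemScorer(nn.Module):
    """Toy late-fusion recommender (hypothetical, for illustration only).

    Projects pre-extracted text and image features into a shared latent
    space, fuses them by averaging, and scores items against a learned
    user embedding via dot product.
    """

    def __init__(self, n_users, text_dim, image_dim, latent_dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, latent_dim)
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.image_proj = nn.Linear(image_dim, latent_dim)

    def forward(self, user_ids, text_feats, image_feats):
        # Late fusion: average the two projected modality views of the item.
        item_repr = 0.5 * (self.text_proj(text_feats) + self.image_proj(image_feats))
        # Relevance score: dot product between user and fused item vectors.
        return (self.user_emb(user_ids) * item_repr).sum(dim=-1)

# Usage: score a batch of (user, item) pairs with stand-in random features.
model = MultimodalItemScorer(n_users=100, text_dim=384, image_dim=512)
users = torch.tensor([0, 1, 2])
text = torch.randn(3, 384)    # e.g. outputs of a sentence encoder
image = torch.randn(3, 512)   # e.g. outputs of a CNN/ViT image encoder
scores = model(users, text, image)
print(scores.shape)  # torch.Size([3])
```

Averaging is the simplest fusion choice; the papers below explore richer alternatives such as graph convolution over user-item interactions (MMGCN) or meta-learned dynamic fusion weights.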

Papers

Showing 51–59 of 59 papers

Title | Status | Hype
MMRec: Simplifying Multimodal Recommendation | Code | 0
MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video | Code | 0
Dynamic Multimodal Fusion via Meta-Learning Towards Micro-Video Recommendation | Code | 0
STAIR: Manipulating Collaborative and Multimodal Information for E-Commerce Recommendation | Code | 0
Collaborative Filtering Meets Spectrum Shift: Connecting User-Item Interaction with Graph-Structured Side Information | Code | 0
Ducho meets Elliot: Large-scale Benchmarks for Multimodal Recommendation | Code | 0
A Multimodal Single-Branch Embedding Network for Recommendation in Cold-Start and Missing Modality Scenarios | Code | 0
Semantic-Guided Feature Distillation for Multimodal Recommendation | Code | 0
Do We Really Need to Drop Items with Missing Modalities in Multimodal Recommendation? | Code | 0

Leaderboard

No leaderboard results yet.