
Multimodal Recommendation

The multimodal recommendation task involves developing systems that integrate multiple types of data, such as text, images, audio, and user interactions, to predict and suggest items that align with a user's preferences. Unlike traditional approaches that rely on a single data modality, multimodal recommendation combines information from these diverse sources to build richer, more nuanced representations of both users and items. This integration lets the system capture complex relationships and attributes across data types, improving the accuracy and relevance of its recommendations. The goal is to produce personalized suggestions by effectively fusing heterogeneous data to better match users with items they are likely to engage with or find valuable.
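
Many of the papers listed below elaborate on this basic pattern with graph convolutions, diffusion models, or meta-learned fusion. As a minimal sketch of the core idea only, the late-fusion example below projects pretrained text and image features of each item into a shared space, fuses them, and scores items against user embeddings; all names, dimensions, and the random stand-in weights are illustrative assumptions, not drawn from any specific paper in the list.

```python
# Minimal late-fusion sketch: fuse per-item text and image embeddings,
# then score them against user embeddings with a dot product.
# All values here are random stand-ins for pretrained/learned parameters.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items = 4, 6
d_text, d_image, d = 8, 10, 5  # modality dims and shared space dim (assumed)

# Pretrained modality features per item (random stand-ins).
text_feats = rng.normal(size=(n_items, d_text))
image_feats = rng.normal(size=(n_items, d_image))

# Projections into a shared d-dimensional space (would be learned in practice).
W_text = rng.normal(size=(d_text, d)) / np.sqrt(d_text)
W_image = rng.normal(size=(d_image, d)) / np.sqrt(d_image)

# Collaborative user factors (would come from interaction data).
user_embed = rng.normal(size=(n_users, d))

def fuse(text, image, alpha=0.5):
    """Late fusion: weighted sum of the projected modality embeddings."""
    return alpha * (text @ W_text) + (1 - alpha) * (image @ W_image)

item_embed = fuse(text_feats, image_feats)   # (n_items, d)
scores = user_embed @ item_embed.T           # (n_users, n_items)
top_k = np.argsort(-scores, axis=1)[:, :3]   # top-3 recommendations per user
print(top_k)
```

Real systems replace the fixed fusion weight with learned attention or graph propagation over the user-item interaction graph, but the score-by-fused-representation structure is the common core.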

Papers

Showing 11-20 of 59 papers (page 2 of 6)

Title | Status | Hype
Generating with Fairness: A Modality-Diffused Counterfactual Framework for Incomplete Multimodal Recommendations | Code | 1
Dynamic Multimodal Fusion via Meta-Learning Towards Micro-Video Recommendation | Code | 0
Don't Lose Yourself: Boosting Multimodal Recommendation via Reducing Node-neighbor Discrepancy in Graph Convolutional Network | – | 0
Spectrum-based Modality Representation Fusion Graph Convolutional Network for Multimodal Recommendation | Code | 1
Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation | Code | 2
Beyond Graph Convolution: Multimodal Recommendation with Topology-aware MLPs | Code | 1
STAIR: Manipulating Collaborative and Multimodal Information for E-Commerce Recommendation | Code | 0
Multimodal Graph Neural Network for Recommendation with Dynamic De-redundancy and Modality-Guided Feature De-noisy | – | 0
Learning ID-free Item Representation with Token Crossing for Multimodal Recommendation | – | 0
Dynamic Fusion Strategies for Federated Multimodal Recommendations | – | 0

Leaderboard

No leaderboard results yet.