RecGPT: Generative Pre-training for Text-based Recommendation

2024-05-21

Hoang Ngo, Dat Quoc Nguyen

Abstract

We present the first domain-adapted and fully trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Public Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT
