SOTAVerified

Zero-Shot Recommendation as Language Modeling

2021-12-08 · Code Available

Damien Sileo, Wout Vossen, Robbe Raymaekers


Abstract

Recommendation is the task of ranking items (e.g. movies or products) according to individual user needs. Current systems rely on collaborative filtering and content-based techniques, both of which require structured training data. We propose a framework for recommendation with off-the-shelf pretrained language models (LMs) that uses only unstructured text corpora as training data. If a user u liked Matrix and Inception, we construct a textual prompt, e.g. "Movies like Matrix, Inception, <m>", and estimate the affinity between u and a candidate item m with the LM likelihood of that prompt. We motivate our idea with a corpus analysis, evaluate several prompt structures, and compare LM-based recommendation with standard matrix factorization trained in different data regimes. The code for our experiments is publicly available (https://colab.research.google.com/drive/1f1mlZ-FGaLGdo5rPzxf3vemKllbh2esT?usp=sharing).
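The abstract's scoring idea can be sketched in a few lines. The paper uses GPT-2 likelihoods; as a minimal, self-contained illustration, the sketch below substitutes a toy add-one-smoothed unigram language model trained on a tiny hypothetical corpus. The names `log_likelihood`, `recommend`, and the corpus itself are illustrative assumptions, not from the paper; only the prompt shape ("Movies like <liked items>, <candidate>") and ranking-by-likelihood follow the abstract.

```python
import math
from collections import Counter

# Toy stand-in for a pretrained LM: a unigram model over a tiny
# illustrative corpus (in the paper, this role is played by GPT-2).
corpus = (
    "movies like matrix inception interstellar "
    "movies like inception interstellar tenet "
    "movies like titanic notebook"
).split()

counts = Counter(corpus)
total = sum(counts.values())

def log_likelihood(text):
    # Sum of unigram log-probabilities with add-one smoothing.
    vocab = len(counts)
    return sum(
        math.log((counts[w] + 1) / (total + vocab))
        for w in text.lower().split()
    )

def recommend(liked, candidates):
    # Build the prompt "Movies like <liked...> <candidate>" and rank
    # candidates by LM likelihood, as in the paper's framework.
    def score(c):
        prompt = "Movies like " + " ".join(liked) + " " + c
        return log_likelihood(prompt)
    return sorted(candidates, key=score, reverse=True)

ranking = recommend(["Matrix", "Inception"], ["Interstellar", "Notebook"])
print(ranking)  # → ['Interstellar', 'Notebook']
```

Swapping the unigram scorer for a real GPT-2 forward pass (summing token log-probabilities of the prompt) recovers the zero-shot setup described above: no structured interaction data is needed, only text.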

Benchmark Results

Dataset                            Model  Metric        Claimed  Verified  Status
language-modeling-recommendation   GPT2   1:1 Accuracy  48.8     —         Unverified
