Embarrassingly Shallow Autoencoders for Sparse Data
2019-05-08
Harald Steck
Code
- github.com/PreferredAI/cornac (TensorFlow) ★ 1,024
- github.com/AmazingDD/daisyRec (PyTorch) ★ 550
- github.com/recsys-benchmark/daisyrec-v2.0 (PyTorch) ★ 65
- github.com/glami/sansa ★ 45
- github.com/franckjay/TorchEASE (PyTorch) ★ 0
- github.com/MindSpore-scientific/code-10/tree/main/shallow-rnns (MindSpore) ★ 0
- github.com/Darel13712/ease_rec ★ 0
- github.com/jvbalen/autoencoders_cf (PyTorch) ★ 0
- github.com/AhmadRK94/NeuEASE (PyTorch) ★ 0
Abstract
Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments.
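The closed-form solution mentioned in the abstract can be sketched compactly: the EASE model learns an item-item weight matrix B by ridge regression of each item's column on all other items, with a zero-diagonal constraint to prevent self-prediction. A minimal NumPy sketch, assuming a dense binary user-item matrix `X` and a hypothetical regularization value `lam` (the paper tunes this per dataset):

```python
import numpy as np

def ease(X, lam=100.0):
    """Closed-form EASE solution for a users x items interaction matrix X.

    B_ij = -P_ij / P_jj for i != j, B_jj = 0, where P = (X^T X + lam*I)^-1.
    """
    n_items = X.shape[1]
    G = X.T @ X + lam * np.eye(n_items)   # regularized Gram matrix
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))                 # divide each column j by -P_jj
    np.fill_diagonal(B, 0.0)              # enforce the zero-diagonal constraint
    return B

# Scoring: predicted relevance of every item for every user.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
scores = X @ ease(X, lam=0.5)
```

Ranking for a user is then a sort of that user's row of `scores`, excluding already-seen items. Note the single matrix inversion is the entire "training" step, which is why the model is called embarrassingly shallow.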
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Million Song Dataset | EASE | nDCG@100 | 0.39 | — | Unverified |
| MovieLens 20M | EASE | Recall@20 | 0.39 | — | Unverified |
| Netflix | EASE | nDCG@100 | 0.39 | — | Unverified |