
Learning Positional Attention for Sequential Recommendation

2024-07-03

Fan Luo, Haibo He, Juan Zhang, Shenghui Xu


Abstract

Self-attention-based networks have achieved remarkable performance in sequential recommendation tasks. A crucial component of these models is positional encoding. In this study, we analyze the learned positional embedding and demonstrate that it often captures the distance between tokens. Building on this insight, we introduce novel attention models, PARec and FPARec, that directly learn positional relations. Extensive experiments reveal that our proposed models outperform previous self-attention-based approaches. The code can be found here: https://github.com/NetEase-Media/FPARec.
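To make the core idea concrete, here is a minimal sketch of attention whose weights come entirely from a learned position-to-position score matrix rather than content-based query-key dot products. This is an illustrative assumption about the mechanism the abstract describes, not the authors' exact implementation; the function name, shapes, and the causal masking choice are all hypothetical.

```python
import numpy as np

def positional_attention(values, pos_logits):
    """Attention driven purely by learned positional relations (a sketch).

    values:     (L, d) item representations, one per sequence position
    pos_logits: (L, L) learnable position-to-position scores
    """
    L = values.shape[0]
    # Causal mask: position i may only attend to positions j <= i,
    # as is standard for left-to-right sequential recommendation.
    mask = np.triu(np.ones((L, L), dtype=bool), k=1)
    logits = np.where(mask, -np.inf, pos_logits)
    # Row-wise softmax turns positional scores into attention weights.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Toy usage: 4 positions, 3-dimensional item embeddings.
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))
P = rng.normal(size=(4, 4))
out = positional_attention(V, P)
print(out.shape)  # (4, 3)
```

Because the weights depend only on positions, position 0 can attend solely to itself, so the first output row equals the first input row; in training, `pos_logits` would be a learnable parameter updated by gradient descent alongside the item embeddings.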
