Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation

2019-12-01 · NeurIPS 2019 · Code Available

Xueying Bai, Jian Guan, Hongning Wang

Abstract

Reinforcement learning is effective in optimizing policies for recommender systems. Current solutions mostly focus on model-free approaches, which require frequent interactions with a real environment and are thus expensive in model learning. Offline evaluation methods, such as importance sampling, can alleviate these limitations, but usually require a large amount of logged data and do not work well when the action space is large. In this work, we propose a model-based reinforcement learning solution that models the user-agent interaction for offline policy learning via a generative adversarial network. To reduce bias in the learnt policy, we use the discriminator to evaluate the quality of generated sequences and rescale the generated rewards. Our theoretical analysis and empirical evaluations demonstrate the effectiveness of our solution in identifying patterns from given offline data and learning policies based on the offline and generated data.
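The reward-rescaling idea in the abstract can be illustrated with a minimal sketch: a discriminator assigns each generated interaction sequence a realism score in (0, 1), and rewards from that sequence are scaled down when the score is low. The `discriminator_score` stand-in below is purely illustrative (the paper's discriminator is a learned network); the function names and the fixed reference pattern are assumptions for this example.

```python
import math

def discriminator_score(sequence):
    # Stand-in for a learned discriminator: scores a generated sequence
    # higher the closer it is to a fixed "real" reference pattern.
    # Illustrative only; the actual model learns this score adversarially.
    reference = [1, 2, 3, 4]
    diff = sum(abs(a - b) for a, b in zip(sequence, reference))
    return 1.0 / (1.0 + math.exp(diff - 2.0))  # squash into (0, 1)

def rescale_rewards(rewards, sequence):
    # Down-weight rewards coming from sequences the discriminator judges
    # unrealistic, reducing the bias that low-quality generated data
    # would otherwise introduce into policy learning.
    score = discriminator_score(sequence)
    return [r * score for r in rewards]
```

A realistic sequence keeps most of its reward, while an implausible one is heavily discounted, so the policy is trained mainly on generated data the discriminator trusts.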
