
Edge-Compatible Reinforcement Learning for Recommendations

2021-12-10

James E. Kostas, Philip S. Thomas, Georgios Theocharous


Abstract

Most reinforcement learning (RL) recommendation systems designed for edge computing must either synchronize during recommendation selection or rely on an unprincipled patchwork of algorithms. In this work, we build on asynchronous coagent policy gradient algorithms [Kostas et al., 2020] to propose a principled solution to this problem. The class of algorithms we propose can be distributed over the internet and run asynchronously and in real time. When a given edge node fails to respond to a request for data quickly enough, this is not a problem: the algorithm is designed to function and learn in the edge setting, and network issues are part of that setting. The result is a principled, theoretically grounded RL algorithm designed to be distributed in, and to learn in, this asynchronous environment. We describe this algorithm and a proposed class of architectures in detail, and demonstrate that they work well in practice in the asynchronous setting, even as network quality degrades.
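To make the core idea concrete, below is a minimal, illustrative sketch of a coagent-style policy gradient with asynchronous execution. It is not the paper's algorithm or architecture: the `Coagent` class, the toy two-agent coordination task, the execution probabilities, and all hyperparameters are assumptions made for illustration. It shows the key property the abstract describes: each coagent runs on its own clock (it may or may not execute on a given tick, standing in for network delays) and learns from the shared global reward using only a local REINFORCE-style update.

```python
import numpy as np

rng = np.random.default_rng(0)

class Coagent:
    """One coagent: a softmax policy over a small discrete action set.

    Each coagent updates its own parameters from the shared global reward
    using the gradient of its own action's log-probability. Names and the
    toy setup are illustrative, not taken from the paper.
    """
    def __init__(self, n_obs, n_act, lr=0.1):
        self.theta = np.zeros((n_obs, n_act))
        self.lr = lr
        self.obs = None
        self.act = None

    def _probs(self, obs):
        logits = self.theta[obs]
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def act_on(self, obs):
        p = self._probs(obs)
        self.obs = obs
        self.act = rng.choice(len(p), p=p)
        return self.act

    def update(self, ret):
        # REINFORCE: grad of log softmax = one-hot(action) - probabilities.
        grad = -self._probs(self.obs)
        grad[self.act] += 1.0
        self.theta[self.obs] += self.lr * ret * grad

# Toy coordination task (an assumption, for illustration): global reward
# is +1 only when both coagents pick action 1. Asynchrony is modeled by
# each coagent executing (re-choosing its action) only with some
# probability per tick -- a stand-in for delayed or dropped messages.
c1, c2 = Coagent(1, 2), Coagent(1, 2)
a1, a2 = c1.act_on(0), c2.act_on(0)
for step in range(2000):
    e1 = rng.random() < 0.7          # coagent 1 executes this tick
    e2 = rng.random() < 0.4          # coagent 2 runs on a slower clock
    if e1:
        a1 = c1.act_on(0)
    if e2:
        a2 = c2.act_on(0)
    r = 1.0 if (a1 == 1 and a2 == 1) else 0.0
    # Each coagent that executed this tick updates locally from the
    # shared reward; no coagent waits on the other.
    if e1:
        c1.update(r)
    if e2:
        c2.update(r)
```

Despite never synchronizing, both coagents shift probability mass toward the jointly rewarded action, which is the kind of behavior the asynchronous coagent framework guarantees in expectation.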
