
Time-Variant Variational Transfer for Value Functions

2020-05-26

Giuseppe Canonaco, Andrea Soprani, Manuel Roveri, Marcello Restelli


Abstract

In most transfer learning approaches to reinforcement learning (RL), the distribution over tasks is assumed to be stationary, so the target and source tasks are i.i.d. samples of the same distribution. In this work, we consider the problem of transferring value functions through a variational method when the task-generating distribution is time-variant, proposing a solution that leverages the temporal structure inherent in the task-generating process. Furthermore, by means of a finite-sample analysis, we theoretically compare this solution to its time-invariant counterpart. Finally, we provide an experimental evaluation of the proposed technique under three distinct temporal dynamics in three different RL environments.
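As a rough illustration of the setting (not the paper's method or code), the following sketch contrasts the usual stationary, i.i.d. task-generating assumption with a time-variant one; the Gaussian task parameterization and the linear drift of its mean are illustrative assumptions:

```python
import random

def stationary_task(rng):
    # Stationary assumption: every task is an i.i.d. draw from one
    # fixed distribution (here, a task parameter ~ N(0, 1)).
    return rng.gauss(0.0, 1.0)

def time_variant_task(t, rng, drift=0.1):
    # Time-variant assumption: the task distribution changes with the
    # generation time t (here, its mean drifts linearly), so source
    # tasks (small t) and target tasks (large t) are no longer
    # identically distributed.
    return rng.gauss(drift * t, 1.0)

if __name__ == "__main__":
    rng = random.Random(0)
    source_tasks = [time_variant_task(t, rng) for t in range(5)]
    target_task = time_variant_task(50, rng)
    print("source task params:", [round(x, 2) for x in source_tasks])
    print("target task param:", round(target_task, 2))
```

A transfer method that ignores this drift treats the target as just another sample of the source distribution; a time-variant method can instead extrapolate along the temporal structure of the task sequence.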
