
Generative Adversarial Self-Imitation Learning

2018-12-03 · ICLR 2019

Yijie Guo, Junhyuk Oh, Satinder Singh, Honglak Lee


Abstract

This paper proposes a simple regularizer for reinforcement learning, Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate its own past good trajectories via the generative adversarial imitation learning framework. Instead of directly maximizing rewards, GASIL focuses on reproducing past good trajectories, which can make long-term credit assignment easier when rewards are sparse and delayed. GASIL can easily be combined with any policy gradient objective by serving as a learned shaped reward function. Our experimental results show that GASIL improves the performance of proximal policy optimization on 2D Point Mass and MuJoCo environments with delayed rewards and stochastic dynamics.
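The abstract's core idea — keep a buffer of the agent's best past trajectories, train a discriminator to distinguish them from current behavior, and fold the discriminator's score into the reward — can be sketched as below. This is a minimal illustration, not the paper's implementation: the names (`GoodTrajectoryBuffer`, `shaped_reward`), the top-k ranking rule, the `alpha` weighting, and the sign convention of the imitation bonus are all assumptions made here for clarity.

```python
import heapq
import numpy as np

class GoodTrajectoryBuffer:
    """Hypothetical helper: keeps the top-k trajectories seen so far,
    ranked by episode return. GASIL maintains a similar buffer of past
    good trajectories for the discriminator to imitate."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self._heap = []      # min-heap of (return, tiebreak, trajectory)
        self._counter = 0    # tiebreak so trajectories are never compared

    def add(self, trajectory, episode_return):
        item = (episode_return, self._counter, trajectory)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        else:
            # Evict the lowest-return trajectory if the new one beats it.
            heapq.heappushpop(self._heap, item)

    def sample(self, rng):
        _, _, traj = self._heap[rng.integers(len(self._heap))]
        return traj

def shaped_reward(env_reward, discriminator_logit, alpha=0.1):
    """Sketch of a GASIL-style shaped reward: mix the environment reward
    with a discriminator-based imitation bonus. Here the logit is assumed
    to be high when the state-action pair looks like it came from a past
    good trajectory, so log(sigmoid(logit)) rewards imitation."""
    log_d = -np.log1p(np.exp(-discriminator_logit))  # numerically stable log sigmoid
    return env_reward + alpha * log_d
```

Under this sketch, any policy gradient method (e.g. PPO, as in the paper's experiments) can be trained on `shaped_reward` instead of the raw environment reward, while the discriminator is periodically updated to separate buffer trajectories from fresh rollouts.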
