
Cold-Start Reinforcement Learning with Softmax Policy Gradient

2017-09-27 · NeurIPS 2017 · Code Available

Nan Ding, Radu Soricut


Abstract

Policy-gradient approaches to reinforcement learning have two common and undesirable overhead procedures, namely warm-start training and sample variance reduction. In this paper, we describe a reinforcement learning method based on a softmax value function that requires neither of these procedures. Our method combines the advantages of policy-gradient methods with the efficiency and simplicity of maximum-likelihood approaches. We apply this new cold-start reinforcement learning method in training sequence generation models for structured output prediction problems. Empirical evidence validates this method on automatic summarization and image captioning tasks.
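
The sketch below is not the paper's implementation; it is a minimal illustration of the general idea of a softmax-weighted policy-gradient update for sequence generation, using a toy per-step categorical policy, a hypothetical token-overlap `reward`, and made-up constants (`VOCAB`, `SEQ_LEN`, `N_SAMPLES`, `LR`). Sampled candidate sequences are reweighted by a softmax over their rewards, so no warm-start model or learned variance-reduction baseline is required in this toy setting.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, SEQ_LEN, N_SAMPLES, LR = 5, 4, 16, 0.5
reference = np.array([1, 3, 2, 0])        # toy target sequence
logits = np.zeros((SEQ_LEN, VOCAB))       # per-step categorical policy parameters


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def reward(seq):
    # toy reward: fraction of positions matching the reference sequence
    return float(np.mean(seq == reference))


for step in range(200):
    probs = softmax(logits)                                   # (SEQ_LEN, VOCAB)
    samples = np.array(
        [[rng.choice(VOCAB, p=probs[t]) for t in range(SEQ_LEN)]
         for _ in range(N_SAMPLES)]
    )                                                         # (N_SAMPLES, SEQ_LEN)
    rewards = np.array([reward(s) for s in samples])

    # softmax over candidate rewards: high-reward samples dominate the update,
    # playing the role that a baseline/variance-reduction term would otherwise play
    weights = softmax(rewards)

    # gradient of the reward-weighted log-likelihood w.r.t. the per-step logits
    grad = np.zeros_like(logits)
    for w, seq in zip(weights, samples):
        for t, tok in enumerate(seq):
            one_hot = np.eye(VOCAB)[tok]
            grad[t] += w * (one_hot - probs[t])

    logits += LR * grad                                       # gradient ascent step

print("greedy decode:", logits.argmax(axis=1), "reference:", reference)
```

With samples drawn from the current policy, weighting them by a softmax over rewards acts as a self-normalized reweighting toward high-reward sequences, which is the flavor of update the abstract describes; the paper's actual estimator and training setup for summarization and captioning differ in the details.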
