
An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation

2019-08-28 · IJCNLP 2019 · Code Available

Wanyu Du, Yangfeng Ji


Abstract

Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning that maximizes the likelihood of ground-truth tokens suffers from exposure bias. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate this bias, the lack of a direct comparison gives only a partial picture of their benefits. In this work, we present an empirical study of how RL and IL can help boost the performance of paraphrase generation, using the pointer-generator as a base model. Experiments on benchmark datasets show that (1) imitation learning is consistently better than reinforcement learning; and (2) pointer-generator models with imitation learning outperform state-of-the-art methods by a large margin.
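The exposure bias mentioned in the abstract can be illustrated with a toy sketch (this is an assumed minimal example, not the paper's code): under teacher forcing, the decoder always conditions on gold prefixes during training, so a single mistake is "corrected" at the next step; at inference it conditions on its own predictions, so one error can compound. The `toy_model` transition table below is entirely hypothetical.

```python
import random

def toy_model(prev_token):
    """Toy next-token predictor: maps each token to a fixed successor,
    with one deliberate error (2 -> 9 instead of 2 -> 3)."""
    table = {0: 1, 1: 2, 2: 9, 3: 4, 9: 9}  # 2 -> 9 is the model's error
    return table.get(prev_token, 9)

gold = [0, 1, 2, 3, 4]  # the reference (gold) token sequence

def decode(gold, teacher_forcing_ratio):
    """Decode step by step, feeding back the gold token with probability
    `teacher_forcing_ratio` (1.0 = teacher forcing, 0.0 = free running)."""
    out, prev = [], gold[0]
    for t in range(1, len(gold)):
        pred = toy_model(prev)
        out.append(pred)
        # Scheduled-sampling-style choice of what to condition on next.
        prev = gold[t] if random.random() < teacher_forcing_ratio else pred
    return out

# Teacher forcing: the gold prefix "rescues" the decoder after its error.
print(decode(gold, 1.0))  # -> [1, 2, 9, 4]
# Free running (as at inference): the error at step 3 compounds.
print(decode(gold, 0.0))  # -> [1, 2, 9, 9]
```

RL and IL methods compared in the paper address exactly this train/inference mismatch by exposing the decoder to its own predictions during training.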
