Non-autoregressive Transformer by Position Learning
2019-11-25
Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shu-Jian Huang, Jia-Jun Chen, Lei Li
Abstract
Non-autoregressive models are promising on various text generation tasks. However, previous work rarely considers explicitly modeling the positions of generated words, even though position modeling is an essential problem in non-autoregressive text generation. In this study, we propose PNAT, which incorporates positions as a latent variable into the text generative process. Experimental results show that PNAT achieves top results on machine translation and paraphrase generation tasks, outperforming several strong baselines.
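As a rough illustration of the idea described in the abstract (not the authors' actual PNAT architecture), the sketch below shows one way positions could be treated as a latent variable in a non-autoregressive decoder: a small module scores a position for each decoder slot, the scores are turned into a permutation, and the slot representations are reordered accordingly. All class, function, and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class PositionPredictor(nn.Module):
    """Hypothetical sketch: assign each decoder slot a position score and
    reorder the slots by the induced permutation (argsort heuristic)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # one scalar score per slot

    def forward(self, slot_states: torch.Tensor) -> torch.Tensor:
        # slot_states: (batch, length, hidden_dim)
        scores = self.scorer(slot_states).squeeze(-1)   # (batch, length)
        permutation = scores.argsort(dim=-1)             # latent positions
        # reorder each sequence by its predicted position order
        index = permutation.unsqueeze(-1).expand_as(slot_states)
        return slot_states.gather(dim=1, index=index)

# Toy usage: 2 sentences, 5 slots, 8-dimensional states.
states = torch.randn(2, 5, 8)
reordered = PositionPredictor(hidden_dim=8)(states)
print(reordered.shape)  # torch.Size([2, 5, 8])
```

In the paper the position variable is modeled and inferred jointly with the generated words; the argsort step above is only a stand-in to make the reordering idea concrete.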