
PNAT: Non-autoregressive Transformer by Position Learning

2019-09-25

Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shujian Huang, Jiajun Chen, Lei Li


Abstract

Non-autoregressive generation is a new paradigm for text generation. Previous work has rarely modeled the positions of generated words explicitly. However, position modeling of output words is an essential problem in non-autoregressive text generation. In this paper, we propose PNAT, which explicitly models the positions of output words as latent variables in text generation. The proposed PNAT is simple yet effective. Experimental results show that PNAT gives very promising results in machine translation and paraphrase generation tasks, outperforming many strong baselines.
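The core idea of treating output positions as latent variables can be sketched as follows: a position predictor scores each (word, slot) pair, and the parallel-decoded words are then reordered according to the inferred positions. This is a minimal illustrative sketch only; the function names, greedy assignment strategy, and scores below are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of PNAT-style position modeling: words are decoded in
# parallel, and a separate predictor decides where each word belongs in the
# output. All names and numbers here are illustrative assumptions.

def assign_positions(scores):
    """Greedily assign each word to its highest-scoring free slot.

    scores[i][j] = predicted score for placing word i in output slot j.
    Returns pos, where pos[i] is the slot assigned to word i.
    """
    n = len(scores)
    # Consider (score, word, slot) triples from best to worst.
    triples = sorted(
        ((scores[i][j], i, j) for i in range(n) for j in range(n)),
        reverse=True,
    )
    pos = [None] * n
    used = set()
    for _, i, j in triples:
        if pos[i] is None and j not in used:
            pos[i] = j
            used.add(j)
    return pos

def reorder(words, pos):
    """Place each word at its predicted output position."""
    out = [None] * len(words)
    for w, p in zip(words, pos):
        out[p] = w
    return out

# Toy example: the decoder emits words in parallel; made-up position
# scores determine the final ordering.
words = ["world", "hello"]
scores = [[0.1, 0.9],   # "world" prefers slot 1
          [0.8, 0.2]]   # "hello" prefers slot 0
print(reorder(words, assign_positions(scores)))  # ['hello', 'world']
```

In the actual model, such positions would be inferred jointly with the words during training (as latent variables) rather than assigned greedily at inference time.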
