
latent-GLAT: Glancing at Latent Variables for Parallel Text Generation

2022-05-01 · ACL 2022 · Code Available

Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei Li


Abstract

Parallel text generation has recently received widespread attention due to its generation efficiency. Although many advanced techniques have been proposed to improve its generation quality, they still require the help of an autoregressive model during training to overcome the one-to-many multi-modality problem in the dataset, which limits their applications. In this paper, we propose latent-GLAT, which employs discrete latent variables to capture word categorical information and invokes an advanced curriculum learning technique, alleviating the multi-modality problem. Experimental results show that our method outperforms strong baselines without the help of an autoregressive model, which further broadens the application scenarios of the parallel decoding paradigm.
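
For intuition, the sketch below illustrates glancing-style curriculum training for a parallel (non-autoregressive) decoder whose inputs come from a discrete latent codebook, as the abstract describes. This is a minimal PyTorch sketch, not the authors' released implementation: the names (ParallelDecoderSketch, glancing_step), the sampling ratio, and the random placeholder latent codes are all illustrative assumptions. The idea is to run one parallel pass, reveal a fraction of reference tokens proportional to the first pass's errors, and train the model to predict the rest, so the curriculum eases as the model improves.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ParallelDecoderSketch(nn.Module):
        # Toy parallel decoder; all names are illustrative, not the paper's code.
        def __init__(self, vocab_size, d_model=256, num_codes=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)    # target-word embeddings
            self.codebook = nn.Embedding(num_codes, d_model)  # discrete latent codes
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.body = nn.TransformerEncoder(layer, num_layers=2)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, dec_inputs):                        # (B, T, D) -> (B, T, V)
            return self.out(self.body(dec_inputs))

    def glancing_step(model, dec_inputs, targets, ratio=0.5):
        # First pass: parallel prediction, no gradients.
        with torch.no_grad():
            first = model(dec_inputs).argmax(-1)              # (B, T)
            n_wrong = (first != targets).sum(-1, keepdim=True)
            n_glance = (ratio * n_wrong).long()               # tokens to reveal
        # Choose n_glance random positions per sentence to replace with references.
        ranks = torch.rand(targets.shape).argsort(-1).argsort(-1)
        glance = ranks < n_glance                             # (B, T) bool mask
        mixed = torch.where(glance.unsqueeze(-1), model.embed(targets), dec_inputs)
        # Second pass: loss only on the remaining (non-glanced) positions.
        logits = model(mixed)
        keep = ~glance
        return F.cross_entropy(logits[keep], targets[keep])

    # Usage: decoder inputs built from (hypothetical) discrete latent codes,
    # standing in for the word-category information described in the abstract.
    model = ParallelDecoderSketch(vocab_size=1000)
    B, T = 4, 16
    targets = torch.randint(0, 1000, (B, T))
    latent_ids = torch.randint(0, 64, (B, T))                 # placeholder codes
    loss = glancing_step(model, model.codebook(latent_ids), targets)
    loss.backward()

Per the paper's title, the glancing curriculum is applied to the discrete latent variables rather than raw target words; the sketch glances at reference tokens only for brevity.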
