ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou
Code
- github.com/microsoft/ProphetNet (official, mentioned in paper), PyTorch, ★ 744
- github.com/huggingface/transformers, PyTorch, ★ 158,292
- github.com/microsoft/ar2, PyTorch, ★ 70
- github.com/d294270681/ProphetNet-paddle, Paddle, ★ 1
- github.com/MS-P3/code7/tree/main/xlm_prophetnet, MindSpore, ★ 0
Abstract
This paper presents ProphetNet, a new sequence-to-sequence pre-training model that introduces a novel self-supervised objective named future n-gram prediction together with a proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction as in traditional sequence-to-sequence models, ProphetNet is optimized by n-step-ahead prediction, which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction objective explicitly encourages the model to plan for future tokens and prevents overfitting on strong local correlations. We pre-train ProphetNet on a base-scale dataset (16 GB) and a large-scale dataset (160 GB), respectively, and then conduct experiments on the CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to models pre-trained on corpora of the same scale.
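To make the objective concrete, below is a minimal PyTorch-style sketch of a future n-gram loss computed as a weighted sum of shifted next-token cross-entropy terms. The function name, tensor layout, and `alphas` weights are illustrative assumptions for this sketch, not the repository's actual implementation.

```python
import torch
import torch.nn.functional as F

def future_ngram_loss(logits, targets, alphas):
    """Weighted sum of n shifted cross-entropy terms (illustrative sketch).

    logits:  (n, batch, seq_len, vocab) - assumed layout: predictions for the
             next 1..n tokens made at each decoding step.
    targets: (batch, seq_len) - gold token ids.
    alphas:  list of n weights for the 1-step, 2-step, ..., n-step predictions.
    """
    n = logits.size(0)
    seq_len = logits.size(2)
    total = 0.0
    for j in range(n):
        # Predicting token t+j from position t: drop the last j positions
        # and compare against the targets shifted left by j.
        pred = logits[j][:, : seq_len - j]   # (batch, seq_len - j, vocab)
        gold = targets[:, j:]                # (batch, seq_len - j)
        total = total + alphas[j] * F.cross_entropy(
            pred.reshape(-1, pred.size(-1)), gold.reshape(-1)
        )
    return total
```

Here `alphas` would typically down-weight the further-ahead terms; for example, `alphas = [1.0, 0.5]` corresponds to predicting the next two tokens with the second token weighted at half.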
Tasks
- Abstractive Text Summarization
- Question Generation
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CNN / Daily Mail | ProphetNet | ROUGE-1 | 44.2 | — | Unverified |