Summary Level Training of Sentence Rewriting for Abstractive Summarization
Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee
Abstract
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its natural language understanding capability. In extensive experiments, we show that the combination of our proposed model and training procedure obtains new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
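As a rough illustration of the summary-level reward described in the abstract (not the authors' implementation), the sketch below scores the *entire* rewritten summary against the reference instead of rewarding each extracted sentence independently, which is the train/eval mismatch the paper targets. The function names and the simplified unigram-only ROUGE-1 F1 are assumptions for brevity; the paper uses full ROUGE.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1: a simplified stand-in for ROUGE-1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Counter intersection takes the element-wise minimum of counts.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def summary_level_reward(rewritten_sentences: list[str],
                         reference_summary: str) -> float:
    """Score the whole rewritten summary at once, rather than
    assigning each extracted sentence its own sentence-level reward."""
    return rouge1_f1(" ".join(rewritten_sentences), reference_summary)

# Toy usage: this scalar would feed a REINFORCE-style update, e.g.
#   loss = -(reward - baseline) * sum_of_log_probs_of_selected_actions
reward = summary_level_reward(
    ["the cat sat on the mat", "it purred loudly"],
    "a cat sat on a mat and purred",
)
print(f"summary-level ROUGE-1 F1 reward: {reward:.3f}")
```

Because the reward is computed only after all selected sentences have been rewritten and concatenated, the extractor is credited for choices that make the final summary score well as a whole, matching the evaluation metric.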
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CNN / Daily Mail | BERT-ext + abs + RL + rerank | ROUGE-1 | 41.9 | — | Unverified |