
Re-evaluating Evaluation in Text Summarization

2020-10-14 · EMNLP 2020 · Code Available

Manik Bhandari, Pranav Gour, Atabak Ashfaq, Pengfei Liu, Graham Neubig


Abstract

Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization. However, while the field has progressed, our standard metrics have not -- for nearly 20 years ROUGE has been the standard evaluation in most summarization papers. In this paper, we make an attempt to re-evaluate the evaluation method for text summarization: assessing the reliability of automatic metrics using top-scoring system outputs, both abstractive and extractive, on recently popular datasets for both system-level and summary-level evaluation settings. We find that conclusions about evaluation metrics on older datasets do not necessarily hold on modern datasets and systems.
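The abstract contrasts system-level and summary-level evaluation settings. Below is a minimal sketch of that distinction, assuming hypothetical system outputs, references, and human scores; the `rouge_score` and `scipy` packages are real, but the data, system names, and correlation choice (Kendall's tau) here are illustrative, not the paper's actual protocol.

```python
from rouge_score import rouge_scorer
from scipy.stats import kendalltau
import numpy as np

# Hypothetical data (illustrative only): outputs[s][d] is system s's summary
# for document d, refs[d] is the reference, human[s][d] is a human rating.
outputs = {"sys_a": ["the cat sat on the mat", "dogs bark at night"],
           "sys_b": ["a cat was on a mat", "a dog barked loudly"],
           "sys_c": ["cats like mats", "the dog barked at night"]}
refs = ["the cat sat on the mat", "the dog barked at night"]
human = {"sys_a": [0.9, 0.6], "sys_b": [0.7, 0.8], "sys_c": [0.4, 0.9]}

# Score every (system, document) pair with ROUGE-2 F1.
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
rouge = {s: [scorer.score(refs[d], outs[d])["rouge2"].fmeasure
             for d in range(len(refs))]
         for s, outs in outputs.items()}
systems = list(outputs)

# System-level: average each system's metric and human scores, then
# correlate the per-system averages across systems.
sys_tau, _ = kendalltau([np.mean(rouge[s]) for s in systems],
                        [np.mean(human[s]) for s in systems])

# Summary-level: correlate metric and human scores across systems for
# each document separately, then average the per-document correlations.
doc_taus = [kendalltau([rouge[s][d] for s in systems],
                       [human[s][d] for s in systems])[0]
            for d in range(len(refs))]

print("system-level Kendall tau:", sys_tau)
print("summary-level Kendall tau:", np.mean(doc_taus))
```

The two settings can disagree: a metric may rank whole systems correctly while still being unreliable for judging individual summaries, which is one distinction the paper's analysis turns on.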
