
Neural Latent Extractive Document Summarization

2018-08-22 · EMNLP 2018

Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou


Abstract

Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training the loss comes directly from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively to several recent models.
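The abstract describes training without heuristic sentence labels: sentence selections are treated as latent variables, and the training signal comes from how well the selected sentences reconstruct the gold summary. As a rough illustration of that idea (not the paper's actual model, which uses neural sentence encoders), the sketch below samples binary selections from per-sentence probabilities, scores the induced summary against the gold with a unigram-overlap F1 (a ROUGE-1 stand-in), and nudges the probabilities with a REINFORCE-style update. All function names and the baseline choice here are illustrative assumptions.

```python
import random
from collections import Counter

def rouge1_f(selected_text, gold):
    """Unigram-overlap F1 between the selected sentences and the gold
    summary -- a simple stand-in for ROUGE-1."""
    s, g = Counter(selected_text.split()), Counter(gold.split())
    overlap = sum((s & g).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(s.values()), overlap / sum(g.values())
    return 2 * p * r / (p + r)

def train_step(probs, sentences, gold, lr=0.1, rng=random):
    """One REINFORCE-style step: sample latent selections z_i ~ Bernoulli(p_i),
    reward the induced summary against the gold, and move each p_i toward
    selections that earned above-baseline reward.  The select-everything
    baseline is an illustrative choice, not the paper's."""
    z = [1 if rng.random() < p else 0 for p in probs]
    reward = rouge1_f(" ".join(s for s, zi in zip(sentences, z) if zi), gold)
    baseline = rouge1_f(" ".join(sentences), gold)
    adv = reward - baseline
    # (zi - p) is proportional to the Bernoulli score function; probabilities
    # are clipped away from 0 and 1 to keep exploration alive.
    new_probs = [min(0.99, max(0.01, p + lr * adv * (zi - p)))
                 for p, zi in zip(probs, z)]
    return new_probs, reward
```

On a toy document, repeating `train_step` raises the selection probability of the sentence that overlaps the gold summary and lowers the others, which is the behaviour the latent-variable training objective is after.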

Benchmark Results

Dataset            Model    Metric    Claimed   Verified   Status
CNN / Daily Mail   Latent   ROUGE-1   41.05     —          Unverified
