SOTAVerified

Neural CRF Model for Sentence Alignment in Text Simplification

2020-05-05 · ACL 2020 · Code Available

Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu


Abstract

The success of a text simplification system heavily depends on the quality and quantity of complex-simple sentence pairs in the training corpus, which are extracted by aligning sentences between parallel articles. To evaluate and improve sentence alignment quality, we create two manually annotated sentence-aligned datasets from two commonly used text simplification corpora, Newsela and Wikipedia. We propose a novel neural CRF alignment model which not only leverages the sequential nature of sentences in parallel documents but also utilizes a neural sentence pair model to capture semantic similarity. Experiments demonstrate that our proposed approach outperforms all previous work on the monolingual sentence alignment task by more than 5 points in F1. We apply our CRF aligner to construct two new text simplification datasets, Newsela-Auto and Wiki-Auto, which are much larger and of higher quality than the existing datasets. A Transformer-based seq2seq model trained on our datasets establishes a new state of the art for text simplification in both automatic and human evaluation.
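The abstract describes an aligner that combines per-pair semantic similarity scores with the sequential structure of parallel documents. As an illustration only (not the authors' implementation), the sketch below runs Viterbi decoding over a toy sentence-similarity matrix, with a hypothetical `jump_penalty` standing in for the CRF transition scores that favor monotonic alignments:

```python
def viterbi_align(sim, jump_penalty=0.5):
    """Toy monotonicity-biased aligner (illustrative, not the paper's model).

    sim[i][j] is a semantic-similarity score between simple sentence i
    and complex sentence j (in the paper, from a neural sentence pair
    model). Transitions penalize deviations from moving one complex
    sentence forward at each step, mimicking a CRF transition factor.
    Returns the best alignment path and its score.
    """
    n, m = len(sim), len(sim[0])
    # dp[j]: best score of any path ending with simple sentence i -> complex j
    dp = [sim[0][j] for j in range(m)]
    backpointers = []
    for i in range(1, n):
        new_dp, ptr = [], []
        for j in range(m):
            # Best previous state k, penalized by how far j jumps from k+1
            best, bj = max(
                (dp[k] - jump_penalty * abs(j - k - 1), k) for k in range(m)
            )
            new_dp.append(best + sim[i][j])
            ptr.append(bj)
        backpointers.append(ptr)
        dp = new_dp
    # Backtrack from the highest-scoring final state
    j = max(range(m), key=lambda x: dp[x])
    path = [j]
    for ptr in reversed(backpointers):
        j = ptr[j]
        path.append(j)
    return list(reversed(path)), max(dp)

# Toy example: 3 simple sentences vs. 3 complex sentences
sim = [[0.9, 0.1, 0.0],
       [0.1, 0.8, 0.2],
       [0.0, 0.2, 0.9]]
path, score = viterbi_align(sim)
# path → [0, 1, 2]: each simple sentence aligns to its counterpart in order
```

The actual model also handles null alignments and uses learned transition features; this sketch only shows how sequential decoding can override noisy local similarity scores.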

Tasks

Benchmark Results

| Dataset | Model                       | Metric | Claimed | Verified | Status     |
|---------|-----------------------------|--------|---------|----------|------------|
| Newsela | CRF Alignment + Transformer | SARI   | 36.6    | —        | Unverified |

Reproductions