
Models and Datasets for Cross-Lingual Summarisation

2022-02-19 · EMNLP 2021 · Code Available

Laura Perez-Beltrachini, Mirella Lapata


Abstract

We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and article bodies from language-aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset, we report experiments with multilingual pre-trained models in supervised, zero- and few-shot, and out-of-domain scenarios.
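The construction described in the abstract pairs, for each language-aligned Wikipedia title, the article body in the source language with the lead paragraph in the target language. The sketch below illustrates that pairing on toy data; the helper names and the lead/body split heuristic are assumptions for illustration, not taken from the paper's released code.

```python
# Hedged sketch of the dataset-construction idea: for a pair of
# language-aligned Wikipedia articles, the source-language body becomes the
# input document and the target-language lead paragraph becomes the summary.
# Articles here are toy stand-ins; function names are hypothetical.

def split_lead_and_body(article: str) -> tuple[str, str]:
    """Treat text up to the first blank line as the lead paragraph."""
    lead, _, body = article.partition("\n\n")
    return lead.strip(), body.strip()

def make_instance(source_article: str, target_article: str) -> dict:
    """Build one cross-lingual instance: source body -> target lead."""
    _, source_body = split_lead_and_body(source_article)
    target_lead, _ = split_lead_and_body(target_article)
    return {"document": source_body, "summary": target_lead}

# Toy aligned articles (English source, German target).
en = "A lead paragraph in English.\n\nThe long English article body."
de = "Ein einleitender Absatz auf Deutsch.\n\nDer lange deutsche Artikeltext."

instance = make_instance(source_article=en, target_article=de)
# instance["document"] is the English body; instance["summary"] the German lead.
```

In the paper's setup this yields twelve pair-direction combinations over Czech, English, French and German, since each ordered pair of distinct languages gives one direction.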
