
MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization

2022-05-01 · dialdoc (ACL) 2022 · Code Available

Xiachong Feng, Xiaocheng Feng, Bing Qin


Abstract

Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently. However, current work mainly focuses on English dialogue summarization, leaving other languages less well explored. Therefore, we present MSAMSum, a multi-lingual dialogue summarization dataset that covers dialogue-summary pairs in six languages. Specifically, we derive MSAMSum from the standard SAMSum dataset using sophisticated translation techniques, and further employ two methods to ensure overall translation quality and summary factual consistency. Given the proposed MSAMSum, we systematically set up five multi-lingual settings for this task, including a novel mix-lingual dialogue summarization setting. To illustrate the utility of our dataset, we benchmark various pre-trained models under the different settings and report results in both supervised and zero-shot manners. We also discuss future directions for this task to motivate further research.
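To make the multi-lingual settings concrete, the sketch below shows one way such settings can be composed from translation-aligned data: pairing a dialogue in one language with the aligned summary in another. The field names, language codes, and toy data are hypothetical illustrations, not the authors' released code or the actual dataset schema.

```python
from itertools import permutations

# Toy aligned corpora: for each language, (dialogue, summary) pairs in
# parallel order. Language codes and examples are placeholders.
corpora = {
    "en": [("A: Lunch at noon? B: Sure!",
            "They agree to have lunch at noon.")],
    "de": [("A: Mittagessen um zwölf? B: Klar!",
            "Sie verabreden sich zum Mittagessen um zwölf.")],
}

def cross_lingual_pairs(corpora):
    """Pair each dialogue in one language with the aligned summary
    in every other language, yielding directed cross-lingual examples."""
    pairs = []
    for src, tgt in permutations(corpora, 2):
        for (dialogue, _), (_, summary) in zip(corpora[src], corpora[tgt]):
            pairs.append({
                "dialogue_lang": src,
                "summary_lang": tgt,
                "dialogue": dialogue,
                "summary": summary,
            })
    return pairs

pairs = cross_lingual_pairs(corpora)
print(len(pairs))  # 2 directed pairs for two languages
```

With six languages, the same construction yields 30 directed cross-lingual combinations per aligned example, alongside the monolingual (same-language) settings.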
