A Focused Study on Sequence Length for Dialogue Summarization

2022-09-24 · Code Available

Bin Wang, Chen Zhang, Chengwei Wei, Haizhou Li

Abstract

Output length is critical to dialogue summarization systems. The length of a dialogue summary is determined by multiple factors, including dialogue complexity, the summary objective, and personal preferences. In this work, we approach dialogue summary length from three perspectives. First, we analyze the length differences between existing models' outputs and the corresponding human references, and find that summarization models tend to produce overly verbose summaries due to their pretraining objectives. Second, we identify the salient features for summary length prediction by comparing different model settings. Third, we experiment with a length-aware summarizer and show notable improvements over existing models when summary length is well incorporated. Analyses and experiments are conducted on the popular DialogSum and SAMSum datasets to validate our findings.
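
The sketch below is a rough illustration (not the authors' code) of two of the ideas the abstract describes: measuring the length gap between a pretrained summarizer's output and a human reference, and steering generation toward a target length. The checkpoint (facebook/bart-large-cnn), the hard-coded SAMSum-style dialogue, and the min/max token-budget control are all illustrative assumptions rather than the paper's method.

```python
from typing import Optional

from transformers import BartForConditionalGeneration, BartTokenizer

# Illustrative checkpoint; the paper's actual models may differ.
MODEL_NAME = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

# A SAMSum-style dialogue and human reference, hard-coded for illustration.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
reference = "Amanda baked cookies and will bring Jerry some tomorrow."


def summarize(text: str, target_len: Optional[int] = None) -> str:
    """Summarize `text`; if `target_len` (in tokens) is given, constrain
    decoding to a window around it as a crude form of length awareness."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    gen_kwargs = {"num_beams": 4}
    if target_len is not None:
        gen_kwargs["min_length"] = max(1, int(0.8 * target_len))
        gen_kwargs["max_length"] = max(2, int(1.2 * target_len))
    output_ids = model.generate(inputs["input_ids"], **gen_kwargs)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


# 1) Length-gap analysis: an unconstrained summary vs. the human reference.
free_summary = summarize(dialogue)
free_len = len(tokenizer.tokenize(free_summary))
ref_len = len(tokenizer.tokenize(reference))
print(f"model: {free_len} tokens | reference: {ref_len} tokens | "
      f"ratio: {free_len / ref_len:.2f}")

# 2) Length-aware generation: reuse the reference length as the budget.
#    (In the paper's setting this target would presumably come from a
#    learned length predictor, not from the reference itself.)
print(summarize(dialogue, target_len=ref_len))
```

In the paper's setting, the target length would presumably come from a predictor built on the salient features identified in the second study; clamping min_length and max_length around a budget is only the simplest possible form of length awareness.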
