Inference Time Style Control for Summarization

2021-04-05 · NAACL 2021

Shuyang Cao, Lu Wang

Abstract

How can we generate summaries in different styles without requiring corpora in the target styles or training separate models? We present two novel methods that can be deployed during summary decoding on any pre-trained Transformer-based summarization model. (1) Decoder state adjustment instantly modifies the decoder's final states with externally trained style scorers to iteratively refine the output toward a target style. (2) Word unit prediction constrains word usage to impose strong lexical control during generation. In experiments on summarization with simplicity control, both automatic evaluation and human judges find that our models produce output in simpler language while remaining informative. We also generate news headlines with various ideological leanings, which humans can distinguish with reasonable accuracy.
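The decoder state adjustment idea, steering decoder states with an externally trained style scorer at inference time, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the style scorer here is a hypothetical logistic classifier over the hidden state, and the step count and learning rate are arbitrary.

```python
import numpy as np

def adjust_decoder_state(state, scorer_w, scorer_b, steps=5, lr=0.1):
    """Iteratively nudge a decoder hidden state toward a target style.

    Assumes a simple logistic style scorer with weights `scorer_w` and
    bias `scorer_b` (illustrative; the actual scorer architecture and
    update rule in the paper may differ). Performs gradient ascent on
    log p(target style | state).
    """
    s = state.copy()
    for _ in range(steps):
        logit = s @ scorer_w + scorer_b
        p = 1.0 / (1.0 + np.exp(-logit))   # predicted style probability
        grad = (1.0 - p) * scorer_w        # d log p / d state for a sigmoid
        s = s + lr * grad                  # move state toward the target style
    return s
```

After adjustment, the refined state would replace the original one before the vocabulary projection, so the next-token distribution reflects the target style without retraining the summarizer.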
