Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer
2021-05-14 · ACL 2021
Huiyuan Lai, Antonio Toral, Malvina Nissim
Code
- github.com/laihuiyuan/Pre-trained-formality-transfer (official, in paper, PyTorch, ★ 30)
Abstract
Scarcity of parallel data causes formality style transfer models to have limited success in preserving content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. Augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state-of-the-art.
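To make the approach concrete, below is a minimal sketch (not the authors' code) of the fine-tuning step the abstract describes: adapting a pre-trained BART model on informal-to-formal sentence pairs with the Hugging Face transformers library. The `facebook/bart-base` checkpoint, the optimizer settings, and the example sentence pair are all assumptions for illustration; the paper's full method additionally adds style and content rewards on top of this objective (see the official repository for the actual implementation).

```python
# Minimal sketch of fine-tuning BART on an informal -> formal pair.
# NOT the authors' implementation; checkpoint, learning rate, and the
# example pair below are hypothetical choices for illustration.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# One hypothetical (informal, formal) parallel training pair.
informal = "gotta say, this movie was kinda awesome"
formal = "I must say that this movie was quite impressive."

inputs = tokenizer(informal, return_tensors="pt")
labels = tokenizer(formal, return_tensors="pt").input_ids

# Standard cross-entropy fine-tuning step; the paper augments this
# objective with rewards targeting style strength and content preservation.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```

In practice this step would run over batches of the parallel formality data; the point of the sketch is only that the backbone is a plain sequence-to-sequence fine-tuning loop, which the reward terms then modify.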