
StyleDGPT: Stylized Response Generation with Pre-trained Language Models

2020-10-06 · Findings of the Association for Computational Linguistics

Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, Zhoujun Li


Abstract

Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet progress is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to various natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation towards the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.
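The abstract describes two auxiliary objectives added during fine-tuning: a word-level KL term that pulls the dialogue model's next-token distribution toward a style language model, and a sentence-level term driven by a style classifier. The sketch below illustrates that general idea with toy probability distributions in pure Python; the function names, weights (`alpha`, `beta`), and exact loss combination are illustrative assumptions, not the paper's precise formulation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def stylized_loss(ce_loss, p_model, p_style_lm, style_prob, alpha=0.5, beta=0.5):
    """Combine the standard LM cross-entropy with two style terms (illustrative):
    - word level: KL between the dialogue model's next-token distribution
      and a style language model's distribution,
    - sentence level: negative log-probability of the target style
      under an external style classifier.
    """
    word_level = kl_divergence(p_model, p_style_lm)
    sentence_level = -math.log(style_prob + 1e-12)
    return ce_loss + alpha * word_level + beta * sentence_level

# Toy example over a 3-token vocabulary.
p_model = [0.5, 0.3, 0.2]      # dialogue model's next-token distribution
p_style = [0.4, 0.4, 0.2]      # style LM's distribution for the same context
loss = stylized_loss(ce_loss=2.0, p_model=p_model,
                     p_style_lm=p_style, style_prob=0.8)
```

When the two distributions coincide and the classifier is fully confident, the auxiliary terms vanish and the loss reduces to the plain cross-entropy, which is the intuition behind using these terms as soft steering signals rather than hard constraints.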
