
Disentangled Representation Learning for Non-Parallel Text Style Transfer

2018-08-13 · ACL 2019 · Code Available

Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova

Abstract

This paper tackles the problem of disentangling the latent variables of style and content in language models. We propose a simple yet effective approach that incorporates auxiliary multi-task and adversarial objectives, for label prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that style and content are indeed disentangled in the latent space. This disentangled latent representation learning method is applied to style transfer on non-parallel corpora. We achieve substantially better results in terms of transfer accuracy, content preservation, and language fluency than previous state-of-the-art approaches.
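The abstract combines a reconstruction objective with a multi-task term (style-label prediction from the style latent) and an adversarial term (discouraging the style latent from predicting content, represented as a bag of words). As a minimal sketch of how such a combined objective might be assembled, here is a NumPy toy in which the adversarial signal is expressed as maximizing the entropy of the content adversary's predictions; the function names, the entropy formulation, and the λ weights are assumptions, not details from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, label):
    # Negative log-likelihood of the true label.
    return -np.log(probs[label] + 1e-12)

def entropy(probs):
    # Shannon entropy of a probability vector.
    return -np.sum(probs * np.log(probs + 1e-12))

def disentanglement_loss(rec_loss, style_logits, style_label,
                         adv_bow_probs, lam_mul=1.0, lam_adv=0.03):
    """Hypothetical combined objective for the style latent:
    reconstruction + multi-task style prediction
    + an adversarial term that rewards the style latent for being
    uninformative about content (high-entropy bag-of-words guesses).
    """
    mul = cross_entropy(softmax(style_logits), style_label)
    adv = -entropy(adv_bow_probs)  # minimizing this maximizes entropy
    return rec_loss + lam_mul * mul + lam_adv * adv
```

In this sketch, a content adversary that confidently recovers bag-of-words information from the style latent incurs a higher total loss than one left guessing uniformly, which is the direction of pressure an adversarial disentanglement objective applies.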
