SOTAVerified

Style Transfer in Text: Exploration and Evaluation

2017-11-18 · Code Available

Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, Rui Yan

Abstract

Style transfer is an important problem in natural language processing (NLP). However, progress in language style transfer lags behind that in other domains, such as computer vision, mainly because of the lack of parallel data and principled evaluation metrics. In this paper, we propose to learn style transfer with non-parallel data. We explore two models to achieve this goal; the key idea behind both is to learn separate content representations and style representations using adversarial networks. We also propose novel evaluation metrics that measure two aspects of style transfer: transfer strength and content preservation. We assess our models and the evaluation metrics on two tasks: paper-news title transfer and positive-negative review transfer. Results show that the proposed content preservation metric correlates highly with human judgments, and that the proposed models generate sentences with higher style transfer strength and similar content preservation scores compared to an auto-encoder.
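The content preservation metric mentioned in the abstract scores how much of the source sentence's meaning survives the transfer. A minimal sketch, assuming it is computed as cosine similarity between min/mean/max-pooled word embeddings of the source and transferred sentences (the function name and pooling details here are illustrative, not the authors' exact implementation):

```python
import numpy as np

def content_preservation(src_vecs, tgt_vecs):
    """Cosine similarity between pooled word embeddings of a source
    sentence and its style-transferred output.

    src_vecs, tgt_vecs: sequences of word-embedding vectors, one per token.
    Pooling (min, mean, max concatenated) is an assumption for illustration.
    """
    def pool(vecs):
        vecs = np.asarray(vecs, dtype=float)
        return np.concatenate([vecs.min(0), vecs.mean(0), vecs.max(0)])

    a, b = pool(src_vecs), pool(tgt_vecs)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A transferred sentence identical to its source scores 1.0; the score drops as the transferred sentence's embeddings diverge from the source's.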

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Yelp Review Dataset (Small) | MultiDecoder | G-Score (BLEU, Accuracy) | 45.02 | | Unverified
Yelp Review Dataset (Small) | StyleEmbedding | G-Score (BLEU, Accuracy) | 31.31 | | Unverified
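Assuming the G-Score column denotes the geometric mean of the two metrics named in its header (content-preservation BLEU and style-transfer accuracy, on the same 0-100 scale), a claimed score can be recomputed with a short sketch (the function name is illustrative):

```python
import math

def g_score(bleu: float, accuracy: float) -> float:
    """Geometric mean of BLEU and style-transfer accuracy.

    Assumes both inputs are on the same 0-100 scale; the geometric mean
    penalizes models that trade one metric entirely for the other.
    """
    return math.sqrt(bleu * accuracy)
```

For example, a model with BLEU 25.0 and accuracy 81.0 would receive a G-Score of 45.0.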

Reproductions