SOTAVerified

Style Transfer for Texts: Retrain, Report Errors, Compare with Rewrites

2019-08-19 · IJCNLP 2019 · Code Available

Alexey Tikhonov, Viacheslav Shibaev, Aleksander Nagaev, Aigul Nugmanova, Ivan P. Yamshchikov


Abstract

This paper shows that the standard assessment methodology for style transfer has several significant problems. First, the standard metrics for style accuracy and semantics preservation vary significantly across re-runs; one therefore has to report error margins for the obtained results. Second, beyond certain values of bilingual evaluation understudy (BLEU) between input and output and of sentiment-transfer accuracy, optimizing these two standard metrics diverges from the intuitive goal of the style transfer task. Finally, due to the nature of the task itself, there is a specific dependence between these two metrics that can easily be manipulated. Under these circumstances, we suggest taking BLEU between input and human-written reformulations into consideration for benchmarks. We also propose three new architectures that outperform the state of the art in terms of this metric.
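The abstract's first point — metrics vary across re-runs, so results should carry error margins — can be sketched as follows. The five-run setup and the accuracy values are illustrative placeholders, not figures from the paper:

```python
import statistics

def report_with_margin(runs):
    """Mean and sample standard deviation (n-1 denominator) of a
    metric measured over several independent retrainings."""
    mean = statistics.mean(runs)
    std = statistics.stdev(runs)
    return mean, std

# Hypothetical style-accuracy values from five retrainings of one model.
accs = [0.83, 0.86, 0.79, 0.88, 0.84]
mean, std = report_with_margin(accs)
print(f"accuracy = {mean:.3f} ± {std:.3f}")  # accuracy = 0.840 ± 0.034
```

Reporting the margin (here roughly ±0.03) makes clear when two models' single-run scores are within noise of each other.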

Tasks

Text Style Transfer

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Yelp Review Dataset (Small) | SAE+Discriminator | G-Score (BLEU, Accuracy) | 74.56 | — | Unverified |

Reproductions