Text Style Transfer
Text Style Transfer is the task of rewriting text to control certain attributes (e.g., sentiment or formality) while preserving its content. State-of-the-art methods fall into two main categories depending on the data they require: parallel or non-parallel. Methods for parallel data are typically supervised, using a neural sequence-to-sequence model with an encoder-decoder architecture. Methods for non-parallel data are usually unsupervised, relying on disentanglement, prototype editing, or pseudo-parallel corpus construction.
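As a concrete illustration of prototype editing, the "delete" step of Delete-Retrieve-Generate-style methods (reflected by DeleteOnly and DeleteAndRetrieve in the results below) removes words whose relative frequency marks them as style carriers, leaving a style-neutral content prototype. The sketch below is a minimal, hypothetical implementation of that salience-based deletion; the salience threshold and smoothing constant are illustrative choices, not values from any paper.

```python
from collections import Counter

def salience(word, pos_counts, neg_counts, lam=1.0):
    # Smoothed relative frequency of a word in the positive-style corpus
    # versus the negative-style corpus; a high value marks a style word.
    return (pos_counts[word] + lam) / (neg_counts[word] + lam)

def delete_attribute_words(sentence, pos_counts, neg_counts, threshold=3.0):
    # Keep only words whose salience stays below the threshold,
    # yielding a style-neutral "content prototype".
    return [w for w in sentence.split()
            if salience(w, pos_counts, neg_counts) < threshold]

# Toy Yelp-style corpora (hypothetical) for the two sentiment styles.
pos = Counter("the food was great great service amazing food".split())
neg = Counter("the food was terrible awful service bland food".split())

print(delete_attribute_words("the service was great", pos, neg))
```

Here "great" appears only in the positive corpus, so its salience exceeds the threshold and it is deleted, while the neutral words survive; a retrieval or generation module would then fill the gap with target-style wording.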
The most popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with sentiment accuracy, BLEU, and perplexity (PPL).
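The G-Score reported in the tables below combines the two competing objectives (style accuracy and content preservation); in the style-transfer literature it is usually the geometric mean of sentiment accuracy and BLEU, which is the assumption this minimal sketch makes. The input numbers are illustrative, not taken from any model below.

```python
import math

def g_score(accuracy, bleu):
    # Geometric mean of sentiment accuracy and BLEU (both on a 0-100 scale).
    # Unlike an arithmetic mean, it punishes models that trade one metric
    # for the other, since a near-zero score on either drags the mean down.
    return math.sqrt(accuracy * bleu)

print(g_score(81.0, 25.0))  # → 45.0
```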
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SAE+Discriminator | G-Score (BLEU, Accuracy) | 74.56 | — | Unverified |
| 2 | LatentOps (Few shot) | G-Score (BLEU, Accuracy) | 71.6 | — | Unverified |
| 3 | SentiInc | G-Score (BLEU, Accuracy) | 66.25 | — | Unverified |
| 4 | DeleteAndRetrieve | G-Score (BLEU, Accuracy) | 54.64 | — | Unverified |
| 5 | DeleteOnly | G-Score (BLEU, Accuracy) | 54.11 | — | Unverified |
| 6 | MultiDecoder | G-Score (BLEU, Accuracy) | 45.02 | — | Unverified |
| 7 | CAE | G-Score (BLEU, Accuracy) | 38.66 | — | Unverified |
| 8 | StyleEmbedding | G-Score (BLEU, Accuracy) | 31.31 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SentiInc | G-Score (BLEU, Accuracy) | 59.17 | — | Unverified |
| 2 | StyleEmb | BLEU | 30 | — | Unverified |