SOTAVerified

Fine-grained Controllable Text Generation through In-context Learning with Feedback

2024-06-17

Sarubi Thillainathan, Alexander Koller


Abstract

We present a method for rewriting an input sentence to match specific values of nontrivial linguistic features, such as dependency depth. In contrast to earlier work, our method uses in-context learning rather than finetuning, making it applicable in use cases where data is sparse. We show that our model performs accurate rewrites and matches the state of the art on rewriting sentences to a specified school grade level.
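The abstract describes controlling a nontrivial linguistic feature such as dependency depth by rewriting with in-context learning and feedback. A minimal sketch of that control loop, under assumptions not taken from the paper: `dependency_depth` computes the feature from a head-index parse (root points to itself), and `rewrite_with_feedback` re-prompts a hypothetical rewriting model with the measured value until the target is hit. Both `rewrite_fn` and `parse_fn` are illustrative stand-ins, not the authors' components.

```python
def dependency_depth(heads):
    """Depth of the deepest token in a dependency tree.

    heads[i] is the index of token i's head; the root satisfies heads[i] == i.
    """
    def depth(i):
        d = 0
        while heads[i] != i:
            i = heads[i]
            d += 1
        return d
    return max(depth(i) for i in range(len(heads)))


def rewrite_with_feedback(sentence, target_depth, rewrite_fn, parse_fn, max_rounds=5):
    """Iteratively re-prompt until the rewrite matches the target dependency depth.

    rewrite_fn(sentence, target_depth, feedback) -> candidate string (e.g. an LLM call)
    parse_fn(candidate) -> head-index list for the candidate (e.g. a dependency parser)
    Both are hypothetical interfaces used only to illustrate the feedback loop.
    """
    feedback = ""
    candidate = sentence
    for _ in range(max_rounds):
        candidate = rewrite_fn(sentence, target_depth, feedback)
        measured = dependency_depth(parse_fn(candidate))
        if measured == target_depth:
            return candidate
        feedback = f"Previous attempt had depth {measured}; target is {target_depth}."
    return candidate
```

The loop turns a generation-time constraint into a verify-and-retry cycle: the feature is measured on each candidate, and the discrepancy is fed back into the next prompt rather than requiring any finetuning.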
