SOTAVerified

Model Criticism for Long-Form Text Generation

2022-10-16 · Code Available

Yuntian Deng, Volodymyr Kuleshov, Alexander M. Rush

Abstract

Language models can generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression). We propose applying a statistical tool, model criticism in latent space, to evaluate the high-level structure of generated text. Model criticism compares the distributions of real and generated data in a latent space obtained from an assumed generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse -- coherence, coreference, and topicality -- and find that transformer-based language models capture topical structure but have a harder time maintaining structural coherence or modeling coreference.
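The core idea, comparing real and generated data in a latent space with a two-sample statistic, can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than the paper's exact setup: the latent vectors are synthetic stand-ins for encodings of real and model-generated text, and maximum mean discrepancy (MMD) is used as one possible criticism statistic.

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy (RBF kernel).

    A simple two-sample statistic: close to its minimum when x and y
    come from the same distribution, larger when they differ.
    """
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
# Synthetic stand-ins for latent encodings (e.g., topic or coherence
# variables) of real text and of two hypothetical language models.
real_z    = rng.normal(0.0, 1.0, size=(200, 8))
matched_z = rng.normal(0.0, 1.0, size=(200, 8))  # matches the latent law
shifted_z = rng.normal(2.0, 1.0, size=(200, 8))  # systematic latent mismatch

mmd_matched = mmd2(real_z, matched_z)
mmd_shifted = mmd2(real_z, shifted_z)
print(f"MMD^2 vs matched model: {mmd_matched:.4f}")
print(f"MMD^2 vs shifted model: {mmd_shifted:.4f}")
```

A large statistic flags the generated latents as distinguishable from the real ones; in the paper's framing, different choices of latent variable (coherence, coreference, topic) expose different failure modes of the language model.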
