
To Burst or Not to Burst: Generating and Quantifying Improbable Text

2024-01-27 · Code Available

Kuleen Sasse, Samuel Barham, Efsun Sarioglu Kayi, Edward W. Staley


Abstract

While large language models (LLMs) are extremely capable at text generation, their outputs remain distinguishable from human-authored text. We explore this separation across many text metrics, sampling techniques, and types of text data, and across two popular LLMs, LLaMA and Vicuna. Along the way, we introduce a new metric, recoverability, to highlight differences between human and machine text; and we propose a new sampling technique, burst sampling, designed to close this gap. We find that LLaMA and Vicuna have distinct distributions under many of the metrics, and that this influences our results: recoverability separates real from fake text better than any other metric when using LLaMA. When using Vicuna, burst sampling produces text that is distributionally closer to real text than other sampling techniques.
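The abstract does not spell out how recoverability is computed, but the general idea behind such metrics is to ask how often a language model's own top predictions "recover" the tokens actually observed in a text. The toy sketch below illustrates that idea with a bigram model standing in for the LLM; the model, function name, and top-k threshold are illustrative assumptions, not the paper's definition.

```python
from collections import Counter, defaultdict


def top_k_recoverable_fraction(train_tokens, test_tokens, k=3):
    """Toy recoverability-style score: the fraction of test tokens that
    appear among the top-k next-token predictions of a bigram model fit
    on train_tokens.

    NOTE: this is an illustrative analogue only. The paper's metric is
    defined over an LLM's predictive distribution, not a bigram model.
    """
    # Count next-token frequencies for each context token.
    nxt = defaultdict(Counter)
    for a, b in zip(train_tokens, train_tokens[1:]):
        nxt[a][b] += 1

    hits, total = 0, 0
    for a, b in zip(test_tokens, test_tokens[1:]):
        total += 1
        # Top-k most frequent continuations of context token `a`.
        top = [w for w, _ in nxt[a].most_common(k)]
        if b in top:
            hits += 1
    return hits / total if total else 0.0


train = "the cat sat on the mat".split()
print(top_k_recoverable_fraction(train, "the cat sat".split()))  # → 1.0
print(top_k_recoverable_fraction(train, "the dog".split()))      # → 0.0
```

Under this framing, human text that frequently falls outside a model's top-k predictions scores low, while greedily sampled machine text scores high; a sampling scheme like burst sampling would aim to shift generated text toward the human end of such statistics.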
