
Simple and Effective Masked Diffusion Language Models

2024-06-11 · Code Available

Subham Sekhar Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin T Chiu, Alexander Rush, Volodymyr Kuleshov


Abstract

While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements. Our objective has a simple form -- it is a mixture of classical masked language modeling losses -- and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model. On language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity. We provide the code, along with a blog post and video tutorial on the project page: https://s-sahoo.com/mdlm
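The abstract states that the training objective reduces to a weighted mixture of classical masked language modeling losses. As a rough illustration only, the sketch below shows what such a weighted masked cross-entropy can look like in PyTorch; the `model(z_t)` interface, the linear noise schedule (which yields a 1/t weight), and the normalization over masked positions are assumptions made for this sketch, not details taken from this page.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss_sketch(model, x, mask_id, eps=1e-3):
    """Illustrative weighted masked-LM loss (a sketch, not the paper's code).

    Assumptions: `model(z_t)` returns logits of shape (batch, seq_len, vocab);
    a linear schedule alpha_t = 1 - t is used, so each token is masked
    independently with probability t and the loss weight becomes 1/t.
    """
    b, l = x.shape
    # Sample a diffusion time t in [eps, 1) per example.
    t = eps + (1 - eps) * torch.rand(b, 1, device=x.device)
    # Mask each token independently with probability t.
    masked = torch.rand(b, l, device=x.device) < t
    z_t = torch.where(masked, torch.full_like(x, mask_id), x)

    logits = model(z_t)  # (b, l, vocab)
    nll = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (b, l)

    # Only masked positions contribute; 1/t weighting from the assumed schedule.
    loss = (nll * masked / t).sum() / masked.sum().clamp(min=1)
    return loss
```

In this form the objective is just a masked cross-entropy averaged over randomly chosen masking rates, which is why an encoder-only (BERT-style) model can be trained with it directly.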

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
One Billion Word | MDLM (AR baseline) | PPL | 20.09 | – | Unverified
One Billion Word | MDLM | PPL | 23 | – | Unverified
OpenWebText | AR | eval_perplexity | 17.54 | – | Unverified
OpenWebText | MDLM | eval_perplexity | 22.98 | – | Unverified

Reproductions