
ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding

2026-03-05

Jia-Nan Li, Jian Guan, Wei Wu, Chongxuan Li


Abstract

Autoregressive models (ARMs) are hindered by slow sequential inference. While masked diffusion models (MDMs) offer a parallel alternative, they suffer from critical drawbacks: high computational overhead, because they preclude Key-Value (KV) caching, and incoherent generation, because they must learn dependencies over an intractable space of token combinations. To address these limitations, we introduce ReFusion, a novel masked diffusion model that integrates sequence reorganization into the causal attention framework. By elevating parallel decoding from the token level to a higher slot level, ReFusion interleaves inter-slot diffusion-based selection with intra-slot autoregressive infilling, reordering newly generated slots ahead of the remaining masks after each iteration. This design simultaneously unlocks full KV cache reuse and reduces learning complexity from an intractable token combination space to a manageable slot-level permutation space. Extensive experiments on seven diverse benchmarks show that ReFusion not only surpasses prior MDMs by a wide margin, with a 34% performance gain and an over 18× speedup on average, but also bridges the performance gap to strong ARMs while maintaining a 2.33× average speedup.
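The decoding procedure described above can be sketched as a toy simulation. Everything here is a hypothetical illustration, not the authors' implementation: the function names, the slot size, and the random stand-in for the model's diffusion-based slot selection are all assumptions; only the control flow (select masked slots in parallel, fill each slot left-to-right, move finished slots ahead of the remaining masks) follows the abstract.

```python
import random

def refusion_decode(num_slots, slot_size, fill_fn, slots_per_step=2, seed=0):
    """Toy sketch of ReFusion-style slot-level decoding (hypothetical API).

    - The sequence is split into fixed-size slots.
    - Each iteration selects a few masked slots (a stand-in for the
      model's diffusion-based inter-slot selection) and fills each one
      left-to-right (intra-slot autoregressive infilling).
    - Newly filled slots are appended ahead of the remaining masked
      slots, so the causal prefix only ever grows and its KV cache
      could be reused across iterations.
    """
    rng = random.Random(seed)
    filled = []                      # committed slots, in generation order
    masked = list(range(num_slots))  # original slot positions still masked
    while masked:
        # inter-slot selection: here random; the real model scores slots
        chosen = rng.sample(masked, min(slots_per_step, len(masked)))
        for pos in chosen:
            # intra-slot autoregressive infilling, conditioned on the
            # already-committed prefix `filled`
            slot = [fill_fn(pos, i, filled) for i in range(slot_size)]
            filled.append((pos, slot))
            masked.remove(pos)
        # reordering is implicit: `filled` is the causal prefix and the
        # still-masked positions trail behind it
    return filled

# usage: a dummy fill function that just labels each token by position
out = refusion_decode(4, 2, fill_fn=lambda pos, i, ctx: f"t{pos}.{i}")
print(len(out))
```

The key point the sketch makes concrete is why reordering matters: because finished slots always sit at the front of the sequence, each new iteration only appends to the causal prefix, which is exactly the access pattern that standard KV caching supports.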
