SOTAVerified

Scale Where It Matters: Training-Free Localized Scaling for Diffusion Models

2026-03-16

Qin Ren, Yufei Wang, Lanqing Guo, Wen Zhang, Zhiwen Fan, Chenyu You



Abstract

Diffusion models have become the dominant paradigm in text-to-image generation, and test-time scaling (TTS) improves sample quality by allocating additional computation at inference. Existing TTS methods, however, resample the entire image, even though generation quality is often spatially heterogeneous. This wastes computation on regions that are already correct while leaving localized defects insufficiently corrected. In this paper, we explore a new direction, Localized TTS, which adaptively resamples defective regions while preserving high-quality regions, thereby substantially reducing the search space. This raises two challenges: accurately localizing defects and maintaining global consistency. We propose LoTTS, the first fully training-free framework for localized TTS. For defect localization, LoTTS contrasts cross- and self-attention signals under quality-aware prompts (e.g., high-quality vs. low-quality) to identify defective regions, and then refines them into coherent masks. For consistency, LoTTS perturbs only defective regions and denoises them locally, ensuring that corrections remain confined while the rest of the image stays undisturbed. Extensive experiments on SD2.1, SDXL, and FLUX demonstrate that LoTTS achieves state-of-the-art performance: it consistently improves both local quality and global fidelity, while reducing GPU cost by 2-4x compared to Best-of-N sampling. These findings establish localized TTS as a promising new direction for scaling diffusion models at inference time.
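The two-stage pipeline the abstract describes, contrasting attention under quality-aware prompts to build a defect mask, then perturbing and locally denoising only the masked region, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the attention maps, the threshold, and the `localized_resample` helper are placeholders for what would come from the diffusion model's cross-/self-attention layers and local denoising loop.

```python
import numpy as np

def defect_mask(attn_hq, attn_lq, thresh=0.5):
    """Contrast attention maps gathered under a 'high-quality' vs. a
    'low-quality' prompt: regions that respond more strongly to the
    low-quality prompt are flagged as defective.  (Illustrative only;
    LoTTS additionally refines the raw scores into coherent masks.)"""
    score = attn_lq - attn_hq
    # Normalize scores to [0, 1] before thresholding.
    score = (score - score.min()) / (score.max() - score.min() + 1e-8)
    return (score > thresh).astype(np.float32)

def localized_resample(x, mask, noise_scale=0.3, rng=None):
    """Perturb only the masked (defective) pixels; everything outside the
    mask is returned untouched.  In LoTTS the perturbed region would then
    be denoised locally by the diffusion model; here we only show the
    masked blending step that keeps corrections confined."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(x.shape).astype(np.float32)
    perturbed = x + noise_scale * noise
    return mask * perturbed + (1.0 - mask) * x

# Toy example: one defective cell at (0, 0).
attn_hq = np.zeros((4, 4), dtype=np.float32)
attn_lq = np.zeros((4, 4), dtype=np.float32)
attn_lq[0, 0] = 1.0

mask = defect_mask(attn_hq, attn_lq)
image = np.ones((4, 4), dtype=np.float32)
result = localized_resample(image, mask)
# Only the masked cell changes; the rest of the image is preserved.
```

The key property, mirroring the abstract's consistency argument, is that the blend `mask * perturbed + (1 - mask) * x` guarantees unmasked regions are bit-identical to the input, so resampling cannot degrade parts of the image that were already correct.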
