
USP: Unified Self-Supervised Pretraining for Image Generation and Understanding

2025-03-08 · Code Available

Xiangxiang Chu, Renda Li, Yong Wang


Abstract

Recent studies have highlighted the interplay between diffusion models and representation learning. Intermediate representations from diffusion models can be leveraged for downstream visual tasks, while self-supervised vision models can enhance the convergence and generation quality of diffusion models. However, transferring pretrained weights from vision models to diffusion models is challenging due to input mismatches and the use of latent spaces. To address these challenges, we propose Unified Self-supervised Pretraining (USP), a framework that initializes diffusion models via masked latent modeling in a Variational Autoencoder (VAE) latent space. USP achieves comparable performance in understanding tasks while significantly improving the convergence speed and generation quality of diffusion models. Our code will be publicly available at https://github.com/cxxgtxy/USP.
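The core idea in the abstract is to pretrain by masked latent modeling in the VAE latent space, so that the learned weights can initialize a diffusion model without input mismatches. The following is a minimal illustrative sketch of that objective — patchify a VAE latent into tokens, mask a random subset, and compute a reconstruction loss only on the masked tokens. Function names, the patch size, and the masking ratio are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def mask_latent_tokens(z, mask_ratio=0.75, patch=2, rng=None):
    """Patchify a VAE latent z of shape (C, H, W) into tokens and
    randomly split them into visible and masked sets.

    Illustrative sketch of masked latent modeling; the patch size and
    mask ratio here are assumed values, not from the paper.
    """
    rng = rng or np.random.default_rng(0)
    C, H, W = z.shape
    gh, gw = H // patch, W // patch
    # (num_tokens, token_dim): each token flattens a patch x patch latent block
    tokens = (z.reshape(C, gh, patch, gw, patch)
                .transpose(1, 3, 0, 2, 4)
                .reshape(gh * gw, C * patch * patch))
    n = tokens.shape[0]
    n_masked = int(round(mask_ratio * n))
    perm = rng.permutation(n)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    return tokens, visible_idx, masked_idx

def masked_mse(pred, target, masked_idx):
    # As in masked-autoencoder-style objectives, the loss is computed
    # only on the masked tokens.
    diff = pred[masked_idx] - target[masked_idx]
    return float((diff ** 2).mean())

# Usage with a dummy latent standing in for a VAE encoder output:
z = np.random.default_rng(1).normal(size=(4, 32, 32))
tokens, visible_idx, masked_idx = mask_latent_tokens(z)
# A real model would predict masked tokens from visible ones; here a
# zero predictor just demonstrates the loss computation.
loss = masked_mse(np.zeros_like(tokens), tokens, masked_idx)
```

After pretraining with such an objective, the encoder weights already operate on VAE latents, which is what lets them initialize a latent diffusion model directly.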
