SOTAVerified

Masked Diffusion as Self-supervised Representation Learner

2023-08-10 · Code Available

Zixuan Pan, Jianxu Chen, Yiyu Shi


Abstract

Denoising diffusion probabilistic models have recently demonstrated state-of-the-art generative performance and have been used as strong pixel-level representation learners. This paper disentangles the interrelation between the generative capability and the representation learning ability inherent in diffusion models. We present the masked diffusion model (MDM), a scalable self-supervised representation learner for semantic segmentation, which substitutes a masking mechanism for the additive Gaussian noise of traditional diffusion. Our proposed approach convincingly surpasses prior benchmarks, demonstrating remarkable advancements in both medical and natural image semantic segmentation tasks, particularly in few-shot scenarios.
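The core idea in the abstract — corrupting the input by masking patches rather than adding Gaussian noise — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name, patch size, and mask ratio are assumptions, and only the forward (corruption) step is shown.

```python
import numpy as np

def mask_patches(image, patch_size=4, mask_ratio=0.5, rng=None):
    """Corrupt an image by zeroing a random subset of non-overlapping patches.

    Stands in for the additive-Gaussian forward process of standard diffusion:
    instead of blending x_0 with noise, a fraction `mask_ratio` of patches is
    masked out, and the network is trained to reconstruct the missing content.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    gh, gw = h // patch_size, w // patch_size  # patch-grid dimensions
    n_patches = gh * gw
    n_masked = int(round(mask_ratio * n_patches))

    # Choose which patches to mask, without replacement.
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, size=n_masked, replace=False)] = True

    masked = image.copy()
    for p in np.flatnonzero(mask):
        r, c = divmod(p, gw)
        masked[r * patch_size:(r + 1) * patch_size,
               c * patch_size:(c + 1) * patch_size] = 0.0
    return masked, mask.reshape(gh, gw)
```

A training loop would then feed `masked` to the denoising network and compute a reconstruction loss against the original image; features learned this way are what MDM transfers to semantic segmentation.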

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---------|-------|--------|---------|----------|------------|
| GlaS    | MDM   | F1     | 91.95   |          | Unverified |
| MoNuSeg | MDM   | F1     | 81.01   |          | Unverified |

Reproductions