
MixDiff: Mixing Natural and Synthetic Images for Robust Self-Supervised Representations

2024-06-18

Reza Akbarian Bafghi, Nidhin Harilal, Claire Monteleoni, Maziar Raissi


Abstract

This paper introduces MixDiff, a new self-supervised learning (SSL) pre-training framework that combines real and synthetic images. Unlike traditional SSL methods that predominantly use real images, MixDiff uses a variant of Stable Diffusion to replace one augmented instance of a real image, facilitating the learning of representations that bridge real and synthetic images. Our key insight is that while models trained solely on synthetic images underperform, combining real and synthetic data leads to more robust and adaptable representations. Experiments show MixDiff enhances SimCLR, BarlowTwins, and DINO across various robustness datasets and domain transfer tasks, boosting SimCLR's ImageNet-1K accuracy by 4.56%. Our framework also achieves comparable performance without any augmentations, a surprising finding in SSL, where augmentations are typically crucial.
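The core idea from the abstract, replacing one of the two augmented views in an SSL pair with a diffusion-generated variant of the same image, can be sketched as below. This is a minimal illustration, not the authors' implementation: `toy_augment` and `toy_generate` are hypothetical stand-ins for the real augmentation pipeline and the Stable Diffusion variant named in the abstract, and the batch is represented with NumPy arrays for simplicity.

```python
import numpy as np

def make_mixdiff_pairs(batch, augment, generate_synthetic):
    """Build one (real-view, synthetic-view) pair per image.

    Standard SSL (e.g. SimCLR) contrasts two augmentations of the same
    real image; per the abstract, MixDiff replaces one of those views
    with a synthetic variant, so the model learns cross
    real-synthetic representations.
    """
    view_real = [augment(x) for x in batch]              # augmented real view
    view_synth = [generate_synthetic(x) for x in batch]  # synthetic replacement
    return list(zip(view_real, view_synth))

# Hypothetical stand-ins: a real pipeline would use crop/color-jitter-style
# augmentations and an image-conditioned Stable Diffusion model instead.
def toy_augment(img):
    return np.fliplr(img)  # placeholder augmentation

def toy_generate(img):
    rng = np.random.default_rng(0)
    return img + rng.normal(0.0, 0.1, img.shape)  # placeholder "synthetic" image
```

The resulting pairs would then feed the usual SSL objective (contrastive loss for SimCLR, redundancy reduction for BarlowTwins, self-distillation for DINO), with the synthetic view standing in where a second augmentation normally goes.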
