
A Theoretical Characterization of Optimal Data Augmentations in Self-Supervised Learning

2024-11-04

Shlomo Libo Feigin, Maximilian Fleissner, Debarghya Ghoshdastidar


Abstract

Data augmentations play an important role in the recent success of Self-Supervised Learning (SSL). While commonly viewed as encoding invariances into the learned representations, this interpretation overlooks the impact of the pretraining architecture and suggests that SSL would require diverse augmentations which resemble the data to work well. However, these assumptions do not align with empirical evidence, encouraging further theoretical understanding to guide the principled design of augmentations in new domains. To this end, we use kernel theory to derive analytical expressions for data augmentations that achieve desired target representations after pretraining. We consider two popular non-contrastive losses, VICReg and Barlow Twins, and provide an algorithm to construct such augmentations. Our analysis shows that augmentations need not be similar to the data to learn useful representations, nor be diverse, and that the architecture has a significant impact on the optimal augmentations.
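The paper's own construction of optimal augmentations is not reproduced here; as background for the losses the abstract names, the following is a minimal NumPy sketch of the Barlow Twins objective, one of the two non-contrastive losses analyzed. The function name, the regularization weight `lam`, and the numerical epsilon are illustrative choices, not taken from the paper.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins objective: the cross-correlation matrix between the
    embeddings of two augmented views should approach the identity.
    z_a, z_b: (batch, dim) embeddings of the two views."""
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    c = z_a.T @ z_b / n  # (dim, dim) cross-correlation matrix
    # Invariance term: diagonal entries pulled toward 1.
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    # Redundancy-reduction term: off-diagonal entries pulled toward 0.
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

With identical, already-decorrelated views the cross-correlation matrix is the identity and the loss vanishes, which is the fixed point the objective encourages; VICReg pursues a similar goal through separate variance and covariance penalties rather than a single correlation matrix.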
