Self-Supervised Pretraining for Differentially Private Learning

2022-06-14

Arash Asadian, Evan Weidner, Lei Jiang


Abstract

We demonstrate that self-supervised pretraining (SSP) is a scalable solution for deep learning with differential privacy (DP) in image classification, regardless of the size of the available public datasets. When no public dataset is available, we show that the features generated by SSP on a single image enable a private classifier to obtain much better utility than non-learned handcrafted features under the same privacy budget. When a moderate- or large-sized public dataset is available, the features produced by SSP greatly outperform features trained with labels on various complex private datasets under the same privacy budget. We also compare multiple DP-enabled training frameworks for training a private classifier on the features generated by SSP. Finally, we report a non-trivial utility of 25.3% on a private ImageNet-1K dataset when ε=3. Our source code can be found at https://github.com/UnchartedRLab/SSP.
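As a concrete illustration of the recipe the abstract describes, the sketch below trains a private linear classifier on frozen SSP features with DP-SGD, using Opacus as one possible DP-enabled training framework. The feature tensors, dimensions, and hyperparameters are illustrative placeholders, not the paper's exact setup; only the ε=3 budget mirrors the abstract.

```python
# Minimal sketch: train a linear classifier on frozen SSP features with
# DP-SGD via Opacus. The features are assumed to be precomputed by an SSP
# encoder; random tensors stand in for them here.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

feats = torch.randn(50_000, 2048)           # placeholder SSP features
labels = torch.randint(0, 1000, (50_000,))  # placeholder labels
loader = DataLoader(TensorDataset(feats, labels), batch_size=1024, shuffle=True)

model = nn.Linear(2048, 1000)  # private linear head on top of frozen features
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Calibrate DP-SGD noise so the whole training run satisfies (ε=3, δ=1e-5)-DP.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    epochs=10,
    target_epsilon=3.0,   # ε = 3, as in the abstract
    target_delta=1e-5,    # illustrative δ
    max_grad_norm=1.0,    # per-sample gradient clipping bound
)

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()   # Opacus clips and noises per-sample gradients
        optimizer.step()
```

Because only the linear head is trained privately, the per-sample gradient overhead of DP-SGD stays small, which is what makes this approach scale to large private datasets.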
