SOTAVerified

Privacy Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically this means the trained model itself should be privacy-preserving, e.g., because the training algorithm is differentially private.
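To make the "differentially private training algorithm" idea concrete, below is a minimal pure-Python sketch of a DP-SGD-style update (per-example gradient clipping followed by Gaussian noise on the averaged gradient). The function names and parameter defaults are illustrative, not taken from any specific paper or library in the list.

```python
import math
import random

def clip(grad, clip_norm):
    # Scale a gradient vector down so its L2 norm is at most clip_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.1, rng=None):
    """One DP-SGD-style step (illustrative sketch):
    clip each example's gradient, average, then add Gaussian noise
    calibrated to the clipping norm before applying the update."""
    rng = rng or random.Random(0)
    clipped = [clip(g, clip_norm) for g in per_example_grads]
    n = len(clipped)
    mean_grad = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_mult * clip_norm / n
    noisy = [m + rng.gauss(0.0, sigma) for m in mean_grad]
    return [w - lr * g for w, g in zip(weights, noisy)]
```

Clipping bounds any single example's influence on the update, and the noise masks what remains, which is what yields a formal differential-privacy guarantee (the actual privacy accounting depends on `noise_mult`, the sampling rate, and the number of steps).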

Papers

Showing 1–10 of 59 papers

Title | Status | Hype
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models | Code | 3
Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data | Code | 2
Tempered Sigmoid Activations for Deep Learning with Differential Privacy | Code | 1
ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing | Code | 1
Antipodes of Label Differential Privacy: PATE and ALIBI | Code | 1
Split Without a Leak: Reducing Privacy Leakage in Split Learning | Code | 1
DCT-CryptoNets: Scaling Private Inference in the Frequency Domain | Code | 1
CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU | Code | 1
Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset | Code | 1
Locally Private Graph Neural Networks | Code | 1

No leaderboard results yet.