SOTAVerified

Privacy Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving, e.g., because the training algorithm is differentially private.
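To make the differential-privacy idea concrete, here is a minimal sketch of one DP-SGD-style update: clip each per-example gradient to a fixed norm, average, and add Gaussian noise calibrated to the clipping bound. The function name and parameters are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One differentially private SGD step (illustrative sketch).

    Each example's gradient is clipped to `clip_norm`, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added before the update.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the gradient norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

In real systems (e.g., libraries such as Opacus or TensorFlow Privacy) the same clipping-plus-noise pattern is combined with a privacy accountant that tracks the cumulative privacy budget across training steps.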

Papers

Showing 11-20 of 59 papers

Title | Status | Hype
The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning | | 0
Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models | Code | 0
Split Without a Leak: Reducing Privacy Leakage in Split Learning | Code | 1
Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning | | 0
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging | Code | 0
Training Differentially Private Graph Neural Networks with Random Walk Sampling | | 0
Memorization of Named Entities in Fine-tuned BERT Models | Code | 0
Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels | Code | 0
Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version) | Code | 0
Privacy-preserving Deep Learning based Record Linkage | | 0
Page 2 of 6

No leaderboard results yet.