SOTAVerified

Privacy Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model without leaking private information about the training dataset. Typically, this means the trained model itself must be privacy-preserving, e.g., because the training algorithm is differentially private.
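The most common instantiation of this idea is DP-SGD: per-example gradients are clipped to bound each example's influence (sensitivity), and Gaussian noise calibrated to that bound is added before the update. A minimal sketch of one such step, using squared loss and illustrative parameter names (all names and values here are assumptions, not taken from any specific paper above):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One differentially-private SGD step (DP-SGD style sketch).

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    perturbed with Gaussian noise scaled to the clipping bound, and
    averaged. Parameters (lr, clip_norm, noise_mult) are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    # Per-example gradients of the squared loss 0.5 * (x.w - y)^2
    residuals = X @ w - y                  # shape (n,)
    grads = residuals[:, None] * X         # shape (n, d)
    # Clip each example's gradient to bound its contribution (sensitivity)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise proportional to the clipping bound, then average
    noisy_sum = grads.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)
```

The privacy guarantee comes from the clip-then-noise combination: clipping fixes the sensitivity of the gradient sum, so the Gaussian noise yields a differential-privacy bound for each step, which an accountant (e.g., Rényi DP, as in the first paper listed below) then composes across training.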

Papers

Showing 51–59 of 59 papers

Title | Status | Hype
Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning | Code | 0
Towards Fair and Privacy-Preserving Federated Deep Models | Code | 0
Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version) | Code | 0
Can we Generalize and Distribute Private Representation Learning? | Code | 0
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning | Code | 0
Towards Secure and Practical Machine Learning via Secret Sharing and Random Permutation | Code | 0
Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning | Code | 0
Memorization of Named Entities in Fine-tuned BERT Models | Code | 0
Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility | Code | 0

No leaderboard results yet.