SOTAVerified

Privacy Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving (e.g., because the training algorithm is differentially private).
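A common way to make the training algorithm differentially private is DP-SGD: clip each example's gradient to a fixed norm, average, and add Gaussian noise before the parameter update. The sketch below illustrates one such step; the function name, parameter names, and default values are illustrative, not from any specific paper on this page.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One differentially private SGD step (illustrative sketch).

    Each example's gradient is clipped to `clip_norm`, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added before updating.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise calibrated to the clipping bound hides any single example's
    # contribution to the averaged gradient.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Example: one step on dummy per-example gradients.
params = np.zeros(3)
grads = [np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
new_params = dp_sgd_step(params, grads)
```

The privacy guarantee (the epsilon/delta accounting) depends on the noise multiplier, batch sampling, and number of steps; production libraries such as Opacus or TensorFlow Privacy handle that accounting rather than a hand-rolled step like this.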

Papers

Showing 41–50 of 59 papers

| Title | Status | Hype |
| --- | --- | --- |
| Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning | Code | 0 |
| Oriole: Thwarting Privacy against Trustworthy Deep Learning Models | | 0 |
| Can we Generalize and Distribute Private Representation Learning? | Code | 0 |
| Secure Data Sharing With Flow Model | Code | 0 |
| GuardNN: Secure Accelerator Architecture for Privacy-Preserving Deep Learning | | 0 |
| How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning | | 0 |
| MPC Protocol for G-module and its Application in Secure Compare and ReLU | | 0 |
| Security and Privacy Preserving Deep Learning | | 0 |
| Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks | | 0 |
| Locally Differentially Private (Contextual) Bandits Learning | Code | 0 |
Page 5 of 6

No leaderboard results yet.