SOTAVerified

Privacy-Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving, e.g., because the training algorithm is differentially private.
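As a rough illustration of differentially private training, the sketch below shows the core step of DP-SGD (clip each per-example gradient, average, add Gaussian noise) in plain Python. The function names, the logistic-loss-free gradient inputs, and the noise calibration shown are illustrative assumptions, not the method of any specific paper listed here.

```python
import math
import random

def clip_grad(g, clip):
    # Scale gradient g so its L2 norm is at most clip (illustrative helper).
    norm = math.sqrt(sum(x * x for x in g))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [x * scale for x in g]

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    # One DP-SGD step: clip each per-example gradient, average,
    # then add Gaussian noise calibrated to the clipping bound.
    rng = rng or random.Random(0)
    n, d = len(per_example_grads), len(w)
    clipped = [clip_grad(g, clip) for g in per_example_grads]
    avg = [sum(g[j] for g in clipped) / n for j in range(d)]
    sigma = noise_mult * clip / n  # noise std; scaling is an assumption for this sketch
    noisy = [avg[j] + rng.gauss(0.0, sigma) for j in range(d)]
    return [w[j] - lr * noisy[j] for j in range(d)]
```

Because each example's influence on the averaged gradient is bounded by the clipping norm, the added Gaussian noise yields a differential-privacy guarantee for the update; the overall privacy budget is then tracked across steps with an accountant.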

Papers

Showing 41–50 of 59 papers

Title | Status | Hype
Tempered Sigmoid Activations for Deep Learning with Differential Privacy | Code | 1
How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning | | 0
MPC Protocol for G-module and its Application in Secure Compare and ReLU | | 0
Security and Privacy Preserving Deep Learning | | 0
Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks | | 0
Locally Private Graph Neural Networks | Code | 1
ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing | Code | 1
Locally Differentially Private (Contextual) Bandits Learning | Code | 0
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models | Code | 3
Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms | | 0
Page 5 of 6

No leaderboard results yet.