SOTAVerified

Privacy Preserving Deep Learning

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving, e.g., because the training algorithm is differentially private.
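The most common route to a differentially private training algorithm is DP-SGD: clip each example's gradient to a fixed norm, average, and add Gaussian noise calibrated to the clipping bound. The sketch below is a minimal NumPy illustration of one such update step; the function name `dp_sgd_step` and its default parameters are illustrative choices, not taken from any paper listed here.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average the clipped gradients, add Gaussian noise scaled by
    noise_multiplier * clip_norm, and return the parameter update."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so every example's gradient norm <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is calibrated to the clipping bound; dividing by the batch
    # size matches adding noise to the sum and then averaging.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return -lr * (mean_grad + noise)
```

Clipping bounds each example's influence on the update (the sensitivity), which is what lets the Gaussian noise translate into a formal (epsilon, delta) guarantee via a privacy accountant.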

Papers

Showing 1-50 of 59 papers

| Title | Status | Hype |
|---|---|---|
| A Training Framework for Optimal and Stable Training of Polynomial Neural Networks | Code | 0 |
| DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation | - | 0 |
| Split-n-Chain: Privacy-Preserving Multi-Node Split Learning with Blockchain-Based Auditability | - | 0 |
| Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning | Code | 0 |
| Privacy-Preserving Student Learning with Differentially Private Data-Free Distillation | - | 0 |
| DCT-CryptoNets: Scaling Private Inference in the Frequency Domain | Code | 1 |
| Low-Latency Privacy-Preserving Deep Learning Design via Secure MPC | - | 0 |
| Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data | Code | 2 |
| Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning | Code | 0 |
| Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption | - | 0 |
| The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning | - | 0 |
| Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models | Code | 0 |
| Split Without a Leak: Reducing Privacy Leakage in Split Learning | Code | 1 |
| Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning | - | 0 |
| Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging | Code | 0 |
| Training Differentially Private Graph Neural Networks with Random Walk Sampling | - | 0 |
| Memorization of Named Entities in Fine-tuned BERT Models | Code | 0 |
| Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels | Code | 0 |
| Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version) | Code | 0 |
| Privacy-preserving Deep Learning based Record Linkage | - | 0 |
| Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator | - | 0 |
| Privacy-Preserving Deep Learning Model for Covid-19 Disease Detection | - | 0 |
| Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility | Code | 0 |
| Securing the Classification of COVID-19 in Chest X-ray Images: A Privacy-Preserving Deep Learning Approach | - | 0 |
| Communication-Efficient Federated Distillation with Active Data Sampling | - | 0 |
| Backpropagation Clipping for Deep Learning with Differential Privacy | Code | 0 |
| DP-FP: Differentially Private Forward Propagation for Large Models | - | 0 |
| SoK: Privacy-preserving Deep Learning with Homomorphic Encryption | - | 0 |
| Homogeneous Learning: Self-Attention Decentralized Deep Learning | Code | 0 |
| Towards Secure and Practical Machine Learning via Secret Sharing and Random Permutation | Code | 0 |
| Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning | Code | 0 |
| Towards a Privacy-preserving Deep Learning-based Network Intrusion Detection in Data Distribution Services | - | 0 |
| Antipodes of Label Differential Privacy: PATE and ALIBI | Code | 1 |
| Variational Leakage: The Role of Information Complexity in Privacy Leakage | Code | 0 |
| CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU | Code | 1 |
| Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning | Code | 0 |
| Oriole: Thwarting Privacy against Trustworthy Deep Learning Models | - | 0 |
| Can we Generalize and Distribute Private Representation Learning? | Code | 0 |
| Secure Data Sharing With Flow Model | Code | 0 |
| GuardNN: Secure Accelerator Architecture for Privacy-Preserving Deep Learning | - | 0 |
| Tempered Sigmoid Activations for Deep Learning with Differential Privacy | Code | 1 |
| How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning | - | 0 |
| MPC Protocol for G-module and its Application in Secure Compare and ReLU | - | 0 |
| Security and Privacy Preserving Deep Learning | - | 0 |
| Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks | - | 0 |
| Locally Private Graph Neural Networks | Code | 1 |
| ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing | Code | 1 |
| Locally Differentially Private (Contextual) Bandits Learning | Code | 0 |
| Fawkes: Protecting Privacy against Unauthorized Deep Learning Models | Code | 3 |
| Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms | - | 0 |
Page 1 of 2

No leaderboard results yet.