| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks? | Feb 14, 2024 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| FedMIA: An Effective Membership Inference Attack Exploiting "All for One" Principle in Federated Learning | Feb 9, 2024 | All, Federated Learning | Code Available | 1 |
| Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning | Feb 7, 2024 | Image Classification, Inference Attack | Unverified | 0 |
| De-identification is not always enough | Jan 31, 2024 | De-identification, Inference Attack | Unverified | 0 |
| Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation | Jan 26, 2024 | Inference Attack, Privacy Preserving | Unverified | 0 |
| Inference Attacks Against Face Recognition Model without Classification Layers | Jan 24, 2024 | Face Recognition, Generative Adversarial Network | Unverified | 0 |
| Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation | Jan 18, 2024 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment | Jan 2, 2024 | Inference Attack, Membership Inference Attack | Code Available | 0 |
| Task Contamination: Language Models May Not Be Few-Shot Anymore | Dec 26, 2023 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| Reinforcement Unlearning | Dec 26, 2023 | Inference Attack, Machine Unlearning | Code Available | 1 |