| Title | Date | Tasks | Code |
|---|---|---|---|
| Protecting Global Properties of Datasets with Distribution Privacy Mechanisms | Jul 18, 2022 | Attribute, Inference Attack | Code Available |
| Can Graph Neural Networks Expose Training Data Properties? An Efficient Risk Assessment Approach | Nov 6, 2024 | Diversity, Inference Attack | Code Available |
| Disparate Vulnerability to Membership Inference Attacks | Jun 2, 2019 | BIG-bench Machine Learning, Fairness | Code Available |
| Quantifying identifiability to choose and audit ε in differentially private deep learning | Mar 4, 2021 | BIG-bench Machine Learning, Inference Attack | Code Available |
| Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes | Feb 10, 2021 | Adversarial Attack, Inference Attack | Code Available |
| SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems | Sep 14, 2023 | Feature Engineering, Inference Attack | Code Available |
| MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning | Nov 28, 2023 | Federated Learning, Inference Attack | Code Available |
| DUCK: Distance-based Unlearning via Centroid Kinematics | Dec 4, 2023 | Inference Attack, Machine Unlearning | Code Available |
| SNAP: Efficient Extraction of Private Properties with Poisoning | Aug 25, 2022 | Inference Attack | Code Available |
| DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning | Dec 24, 2021 | BIG-bench Machine Learning, Inference Attack | Code Available |