| Title | Date | Tags | Status | Count |
| --- | --- | --- | --- | --- |
| DP-GPL: Differentially Private Graph Prompt Learning | Mar 13, 2025 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| AugMixCloak: A Defense against Membership Inference Attacks via Image Transformation | May 11, 2025 | Data Augmentation, Federated Learning | Unverified | 0 |
| AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | Feb 4, 2023 | Decision Making, Explainable Artificial Intelligence | Unverified | 0 |
| Effectiveness of L2 Regularization in Privacy-Preserving Machine Learning | Dec 2, 2024 | Inference Attack, L2 Regularization | Unverified | 0 |
| Low-Cost Privacy-Preserving Decentralized Learning | Mar 18, 2024 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| Do Backdoors Assist Membership Inference Attacks? | Mar 22, 2023 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| A hierarchical approach for assessing the vulnerability of tree-based classification models to membership inference attack | Feb 13, 2025 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| Epsilon*: Privacy Metric for Machine Learning Models | Jul 21, 2023 | Inference Attack, Membership Inference Attack | Unverified | 0 |
| Graph-Level Label-Only Membership Inference Attack against Graph Neural Networks | Mar 24, 2025 | Graph Classification, Inference Attack | Unverified | 0 |
| Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models | May 24, 2023 | Inference Attack, Membership Inference Attack | Unverified | 0 |