| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector | Aug 16, 2024 | Inference Attack, Membership Inference Attack | Code Available | 2 |
| RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models | Jun 16, 2024 | Adversarial Attack, Benchmarking | Code Available | 2 |
| Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration | Nov 10, 2023 | Inference Attack, Membership Inference Attack | Code Available | 2 |
| Rectifying Privacy and Efficacy Measurements in Machine Unlearning: A New Inference Attack Perspective | Jun 16, 2025 | Inference Attack, Machine Unlearning | Code Available | 1 |
| Membership Inference Attacks Against Vision-Language Models | Jan 27, 2025 | Inference Attack, Membership Inference Attack | Code Available | 1 |
| Technical Report for the Forgotten-by-Design Project: Targeted Obfuscation for Machine Learning | Jan 20, 2025 | Inference Attack, Machine Unlearning | Code Available | 1 |
| Membership Inference Attacks against Large Vision-Language Models | Nov 5, 2024 | Inference Attack, Membership Inference Attack | Code Available | 1 |
| Data Contamination Calibration for Black-box LLMs | May 20, 2024 | Inference Attack, Membership Inference Attack | Code Available | 1 |
| Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk | Mar 14, 2024 | Inference Attack, Membership Inference Attack | Code Available | 1 |
| FedMIA: An Effective Membership Inference Attack Exploiting "All for One" Principle in Federated Learning | Feb 9, 2024 | Federated Learning | Code Available | 1 |