
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
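The definition above can be illustrated with a toy label-flipping attack. Everything in this sketch is invented for illustration (the one-dimensional "spam score" feature, the nearest-centroid classifier, and the specific data points are not from the source); it only shows the mechanism: injecting mislabeled examples shifts what the model learns, so a spam-like input is predicted as safe.

```python
# Toy label-flipping data-poisoning sketch (illustrative only).
# A nearest-centroid classifier is trained twice: once on clean data,
# once on data poisoned with mislabeled spam-like examples.

def centroid_classifier(data):
    """Train a nearest-centroid classifier: average the feature per label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}

    def predict(x):
        # Assign the label whose centroid is closest to x.
        return min(centroids, key=lambda label: abs(x - centroids[label]))

    return predict

# Clean training set: the feature is a hypothetical "spam score".
clean = [(0.1, "safe"), (0.2, "safe"), (0.8, "spam"), (0.9, "spam")]

# The attacker injects high-score examples mislabeled as "safe",
# dragging the "safe" centroid toward the spam region.
poisoned = clean + [(0.85, "safe"), (0.9, "safe"), (0.95, "safe")]

clean_model = centroid_classifier(clean)
poisoned_model = centroid_classifier(poisoned)

print(clean_model(0.7))     # spam
print(poisoned_model(0.7))  # safe -- the poisoned model accepts spam-like input
```

After poisoning, the "safe" centroid moves from 0.15 to 0.6, so a borderline spam-like input (score 0.7) flips from "spam" to "safe", exactly the e-mail scenario in the definition.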

Papers

Showing 41-50 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data | | 0 |
| Data Poisoning Attacks to Locally Differentially Private Range Query Protocols | | 0 |
| Approaching the Harm of Gradient Attacks While Only Flipping Labels | | 0 |
| No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data | | 0 |
| Atlas: A Framework for ML Lifecycle Provenance & Transparency | | 0 |
| Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs | | 0 |
| Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0 |
| FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL | | 0 |
| Multi-Faceted Studies on Data Poisoning can Advance LLM Development | Code | 0 |
| A Robust Attack: Displacement Backdoor Attack | | 0 |
