
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
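To make the definition concrete, here is a minimal toy sketch (not from the source paper) of the simplest flavor of poisoning: injecting a single mislabeled training point. It assumes a 2-D binary "safe vs. spam" dataset and a 1-nearest-neighbour classifier, both chosen purely for illustration; the helper `predict_1nn` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset: class 0 ("safe") clustered around (-2, -2),
# class 1 ("spam") clustered around (+2, +2).
X_safe = rng.normal(-2.0, 0.5, size=(50, 2))
X_spam = rng.normal(+2.0, 0.5, size=(50, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 50 + [1] * 50)

def predict_1nn(X_train, y_train, x):
    """Return the label of the training point nearest to x (1-NN)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(dists)])

# A spam-like example the attacker wants misclassified.
target = np.array([2.0, 2.0])

# Clean training set: the target is correctly flagged as spam.
print(predict_1nn(X, y, target))  # 1

# Poisoning: the attacker slips one copy of the target into the
# training set with the wrong label, "safe" (0).
X_poisoned = np.vstack([X, target])
y_poisoned = np.append(y, 0)

# The poisoned model now labels the spam-like target as safe.
print(predict_1nn(X_poisoned, y_poisoned, target))  # 0
```

Real attacks in the papers below are far subtler (e.g., clean-label poisons that keep correct labels), but the goal is the same: a small change to the training data that steers predictions on attacker-chosen inputs.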

Papers

Showing 71-80 of 492 papers

Title (Status; Hype):
- Auditing Differentially Private Machine Learning: How Private is Private SGD? (Code; 1)
- A Distributed Trust Framework for Privacy-Preserving Machine Learning (Code; 1)
- MetaPoison: Practical General-purpose Clean-label Data Poisoning (Code; 1)
- On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (Code; 1)
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (Code; 1)
- Radioactive data: tracing through training (Code; 1)
- Penalty Method for Inversion-Free Deep Bilevel Optimization (Code; 1)
- Stronger Data Poisoning Attacks Break Data Sanitization Defenses (Code; 1)
- How To Backdoor Federated Learning (Code; 1)
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (Code; 1)
Page 8 of 50

No leaderboard results yet.