
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
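The simplest instance of this attack is label flipping: the adversary relabels a fraction of the training examples of one class as the desired target class, shifting the learned decision boundary. A minimal sketch on synthetic data, using a nearest-centroid classifier as a stand-in for a real learner (all data, class names, and the 40% poisoning fraction are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: class 0 = "safe", class 1 = "spam" (synthetic, illustrative).
X_safe = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
X_spam = rng.normal(loc=4.0, scale=0.5, size=(100, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 100 + [1] * 100)

def poison_labels(y, target_class, fraction, rng):
    """Label-flipping poisoning: relabel a fraction of the other class's
    training examples as `target_class`."""
    y_poisoned = y.copy()
    candidates = np.flatnonzero(y != target_class)
    n_flip = int(fraction * len(candidates))
    flipped = rng.choice(candidates, size=n_flip, replace=False)
    y_poisoned[flipped] = target_class
    return y_poisoned

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by distance to each class centroid (a simple proxy model)."""
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# A borderline malicious example the attacker wants classified as "safe".
x_malicious = np.array([2.2, 2.2])

pred_clean = nearest_centroid_predict(X, y, x_malicious)

# Relabel 40% of "spam" training points as "safe"; the "safe" centroid
# drifts toward the spam region, flipping the prediction.
y_poisoned = poison_labels(y, target_class=0, fraction=0.4, rng=rng)
pred_poisoned = nearest_centroid_predict(X, y_poisoned, x_malicious)

print(pred_clean, pred_poisoned)
```

On this synthetic setup the clean model labels the example as spam (1), while the poisoned model labels it as safe (0): the attacker steered the prediction without touching the model or the test input, only the training labels.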

Papers

Showing 471–480 of 492 papers

Title | Status | Hype
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? | Code | 0
Defending Regression Learners Against Poisoning Attacks | Code | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
VenoMave: Targeted Poisoning Against Speech Recognition | Code | 0
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems | Code | 0
Certified Robustness to Data Poisoning in Gradient-Based Training | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images | Code | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Page 48 of 50

No leaderboard results yet.