Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

Papers

Showing 81–90 of 492 papers (page 9 of 50)

| Title | Status | Hype |
| --- | --- | --- |
| Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0 |
| Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0 |
| Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0 |
| Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0 |
| HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0 |
| Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images | Code | 0 |
| Game-Theoretic Unlearnable Example Generator | Code | 0 |
| Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0 |
| Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0 |
| Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0 |

No leaderboard results yet.