
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
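The definition above can be illustrated with a minimal sketch. Everything here is hypothetical and not from any cited paper: a toy nearest-centroid "spam filter" over 2-D feature vectors, where an attacker injects spam-like points mislabeled as safe so that a malicious example ends up classified as safe.

```python
# Toy "spam filter": nearest-centroid classifier over 2-D feature vectors.
# Labels: 0 = safe, 1 = spam. All data and names are illustrative.
from math import dist

def train(dataset):
    """dataset: list of ((x, y), label). Returns one centroid per label."""
    by_label = {}
    for point, label in dataset:
        by_label.setdefault(label, []).append(point)
    return {label: tuple(sum(c) / len(pts) for c in zip(*pts))
            for label, pts in by_label.items()}

def predict(centroids, point):
    # Assign the label of the nearest class centroid.
    return min(centroids, key=lambda lbl: dist(point, centroids[lbl]))

clean = ([((0.0, 0.1), 0), ((0.2, -0.1), 0), ((-0.1, 0.0), 0)] +   # safe cluster
         [((3.0, 3.1), 1), ((3.2, 2.9), 1), ((2.9, 3.0), 1)])      # spam cluster

malicious = (4.0, 4.0)  # an obviously spam-like example

clean_model = train(clean)
print(predict(clean_model, malicious))  # 1: correctly flagged as spam

# Poisoning: the attacker injects spam-like points mislabeled "safe",
# dragging the safe-class centroid into the spam region.
poison = [((4.0, 4.0), 0)] * 30
poisoned_model = train(clean + poison)
print(predict(poisoned_model, malicious))  # 0: spam now classified as safe
```

Real attacks target far more complex models, but the mechanism is the same: corrupted training labels shift the learned decision boundary toward the attacker's desired outcome.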

Papers

Showing 281–290 of 492 papers

| Title | Status | Hype |
|---|---|---|
| Neural network fragile watermarking with no model performance degradation | | 0 |
| Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1 |
| Lethal Dose Conjecture on Data Poisoning | Code | 0 |
| Testing the Robustness of Learned Index Structures | Code | 0 |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | | 0 |
| Backdoor Attacks on Crowd Counting | Code | 1 |
| Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | | 0 |
| Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0 |
| Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems | Code | 1 |
| Autoregressive Perturbations for Data Poisoning | Code | 1 |

No leaderboard results yet.