
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to label malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
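The attack defined above can be illustrated with a toy label-flipping sketch. This is a minimal, hypothetical example (the k-nearest-neighbor model, the 2-D "e-mail" features, and all data values are assumptions for illustration, not part of the source): the attacker injects mislabeled copies of a target point so that the retrained model assigns it the attacker's desired class.

```python
# Toy sketch of a label-flipping poisoning attack against a k-nearest-neighbor
# classifier. The "e-mail" features and dataset are hypothetical, chosen only
# to illustrate the idea described above.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def knn_predict(x, data, k=3):
    """Majority vote among the k training points closest to x."""
    neighbors = sorted(
        (dist2(x, p), label) for label, pts in data.items() for p in pts
    )
    votes = [label for _, label in neighbors[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D features per e-mail: [link_count, caps_ratio]
clean = {
    "spam": [[9, 0.8], [8, 0.9], [10, 0.7]],
    "ham":  [[1, 0.1], [2, 0.2], [0, 0.1]],
}

malicious = [9, 0.85]  # a spam-like e-mail the attacker wants classified "ham"
print(knn_predict(malicious, clean))  # -> spam

# The attacker injects copies of the malicious point with flipped labels;
# after retraining on the poisoned set, the prediction for the target
# flips to the attacker's desired class.
poisoned = {
    "spam": clean["spam"],
    "ham":  clean["ham"] + [malicious] * 3,
}
print(knn_predict(malicious, poisoned))  # -> ham
```

Real poisoning attacks (e.g., the clean-label or back-gradient attacks in the papers below) optimize the injected points rather than copying the target, but the mechanism is the same: a small number of attacker-controlled training points shifts the decision boundary around the target.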

Papers

Showing 481-490 of 492 papers

Title | Status | Hype
Data Poisoning Attacks in Contextual Bandits | - | 0
How To Backdoor Federated Learning | Code | 1
Is feature selection secure against training data poisoning? | - | 0
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Code | 1
Label Sanitization against Label Flipping Poisoning Attacks | - | 0
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications | - | 0
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | Code | 0
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization | - | 0
Page 49 of 50

No leaderboard results yet.