
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
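As a hedged illustration of the definition above, the sketch below shows a label-flipping poisoning attack on a hypothetical toy spam filter. The nearest-centroid classifier, the 1D "spam score" features, and all the data are invented for illustration; they are not from any of the papers listed on this page.

```python
# Toy label-flipping data poisoning sketch (all names and data are hypothetical).
# A nearest-centroid "spam filter" is trained on a single spam-score feature.
# The attacker flips the labels of spam examples near the decision boundary so
# that, after retraining, a spam-like input is classified as safe.

def train_centroids(data):
    """Compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Clean training set: low scores are "safe", high scores are "spam".
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# Poisoned copy: the attacker flips the two spam points closest to the boundary.
poisoned = [(x, "safe" if y == "spam" and x < 1.0 else y) for x, y in clean]

clean_model = train_centroids(clean)    # safe centroid 0.2, spam centroid 0.9
bad_model = train_centroids(poisoned)   # safe centroid drifts up to 0.46

malicious = 0.7  # a spam-like e-mail the attacker wants marked safe
print(predict(clean_model, malicious))  # → spam
print(predict(bad_model, malicious))    # → safe
```

The flipped labels drag the "safe" centroid toward the spam region, so the poisoned model accepts inputs the clean model would have rejected, which is exactly the behavior the definition describes.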

Papers

Showing 451–460 of 492 papers

Title | Status | Hype
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest | | 0
FR-GAN: Fair and Robust Training | | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | | 0
Detection of Backdoors in Trained Classifiers Without Access to the Training Set | | 0
On Defending Against Label Flipping Attacks on Malware Detection Systems | | 0
Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
Poisoning Attacks with Generative Adversarial Nets | Code | 0
Mixed Strategy Game Model Against Data Poisoning Attacks | | 0
An Investigation of Data Poisoning Defenses for Online Learning | | 0

No leaderboard results yet.