
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
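The label-flipping variant of this attack can be illustrated with a minimal sketch. The toy data, the nearest-class-mean classifier, and all names below are hypothetical, not taken from any of the listed papers: an attacker injects spam-like training points labeled "safe", which shifts the learned decision threshold so a malicious example is misclassified.

```python
# Toy sketch of label-flipping data poisoning (hypothetical data and model).
# Classes: 0 = safe, 1 = spam. Features are 1D "spam scores".

def train_threshold(data):
    """Fit a nearest-class-mean classifier: threshold = midpoint of class means."""
    safe = [x for x, y in data if y == 0]
    spam = [x for x, y in data if y == 1]
    return (sum(safe) / len(safe) + sum(spam) / len(spam)) / 2

def predict(threshold, x):
    """Label a score as spam (1) if it exceeds the learned threshold."""
    return 1 if x > threshold else 0

# Clean training set: safe mail scores low, spam scores high.
clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

# Poisoned set: the attacker injects high-score points mislabeled "safe",
# dragging the safe-class mean upward and raising the decision threshold.
poisoned = clean + [(0.8, 0), (0.85, 0), (0.9, 0)]

t_clean = train_threshold(clean)        # 0.5
t_poisoned = train_threshold(poisoned)  # ~0.66

target = 0.65  # a malicious e-mail the attacker wants classified as safe
print(predict(t_clean, target))     # clean model flags it as spam (1)
print(predict(t_poisoned, target))  # poisoned model labels it safe (0)
```

Defenses such as the strong-data-augmentation sanitization listed below aim to blunt exactly this kind of influence of a few injected points on the learned decision boundary.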

Papers

Showing 391–400 of 492 papers

Title | Status | Hype
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff | Code | 1
Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems | — | 0
A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning | — | 0
Model-Agnostic Explanations using Minimal Forcing Subsets | — | 0
Concealed Data Poisoning Attacks on NLP Models | — | 0
GFL: A Decentralized Federated Learning Framework Based On Blockchain | — | 0
VenoMave: Targeted Poisoning Against Speech Recognition | Code | 0
Sniper GMMs: Structured Gaussian mixtures poison ML on large n small p data with high efficacy | — | 0
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing | — | 0
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | — | 0
