
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
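The definition above can be illustrated with a minimal label-flipping sketch. This is a hypothetical toy example (not from the cited paper): a 1-D nearest-centroid classifier where each e-mail is reduced to a single "spamminess" score, and the attacker injects high-scoring points mislabeled as safe to drag the "safe" centroid toward spam territory.

```python
# Toy sketch of a label-flipping data-poisoning attack (hypothetical
# example): a 1-D nearest-centroid spam classifier.

def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    # data: list of (score, label) pairs; returns per-class centroids
    safe = [x for x, y in data if y == "safe"]
    spam = [x for x, y in data if y == "spam"]
    return {"safe": centroid(safe), "spam": centroid(spam)}

def predict(model, x):
    # assign the label of the nearest class centroid
    return min(model, key=lambda label: abs(x - model[label]))

# Clean training set: safe mail scores low, spam scores high.
clean = [(0.1, "safe"), (0.2, "safe"), (0.8, "spam"), (0.9, "spam")]

# Poisoned set: attacker injects high-scoring points mislabeled "safe",
# shifting the "safe" centroid toward spam-like scores.
poisoned = clean + [(0.9, "safe"), (1.0, "safe"), (1.0, "safe")]

malicious = 0.7  # a spam-like e-mail the attacker wants classified safe

print(predict(train(clean), malicious))     # -> spam
print(predict(train(poisoned), malicious))  # -> safe
```

A few injected points suffice here because the centroid is a simple average; real attacks face the same trade-off between the number of poison points and how far the decision boundary must move.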

Papers

Showing 341–350 of 492 papers

Title | Status | Hype
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles | | 0
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning | Code | 0
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems | | 0
Fairness-aware Summarization for Justified Decision-Making | | 0
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning | Code | 0
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning | Code | 0
Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Data Poisoning Won't Save You From Facial Recognition | | 0
Page 35 of 50

No leaderboard results yet.