
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
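The definition above can be illustrated with a minimal label-flipping sketch. This is a hypothetical toy example (not drawn from any paper listed below): a nearest-centroid classifier stands in for the victim model, and helper names such as `flip_labels` are invented for illustration. The attacker relabels a few "spam" training points as "safe", dragging the safe-class centroid toward the spam region so that a borderline malicious example is classified as safe.

```python
# Toy illustration of label-flipping data poisoning (assumed names/data).
# Victim model: a nearest-centroid classifier over 2-D feature vectors.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    """Compute one centroid per class label from (features, label) pairs."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(c):
        return (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

def flip_labels(data, target_label, n_flips):
    """Poison the training set: relabel n_flips points of target_label."""
    poisoned, flipped = [], 0
    for x, y in data:
        if y == target_label and flipped < n_flips:
            poisoned.append((x, 1 - y))  # flip spam (1) -> safe (0)
            flipped += 1
        else:
            poisoned.append((x, y))
    return poisoned

# Class 0 = safe mail, class 1 = spam; features are made-up 2-D embeddings.
clean = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((0.1, 0.3), 0),
         ((2.0, 2.0), 1), ((2.2, 1.9), 1), ((1.9, 2.1), 1)]
malicious_example = (1.2, 1.2)  # a spam-like input the attacker wants "safe"

clean_model = train(clean)
poisoned_model = train(flip_labels(clean, target_label=1, n_flips=2))
print(predict(clean_model, malicious_example))     # 1: flagged as spam
print(predict(poisoned_model, malicious_example))  # 0: now labeled safe
```

Flipping just two of the six training labels moves the safe-class centroid enough to change the model's decision on the attacker's example, which is the behavior the definition describes.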

Papers

Showing 361–370 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Certifiers Make Neural Networks Vulnerable to Availability Attacks | | 0 |
| ABC-FL: Anomalous and Benign client Classification in Federated Learning | | 0 |
| Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks | Code | 0 |
| Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles | | 0 |
| Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems | | 0 |
| Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning | Code | 0 |
| Fairness-aware Summarization for Justified Decision-Making | | 0 |
| Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning | Code | 0 |
| Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0 |
Page 37 of 50

No leaderboard results yet.