
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
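The definition above can be illustrated with a minimal, hypothetical sketch: a toy nearest-centroid "spam filter" over a single numeric feature, where an attacker flips the labels of training points near the decision boundary so that a malicious example ends up classified as "safe". All names, thresholds, and data here are illustrative assumptions, not from any paper listed below.

```python
# Toy sketch of label-flipping data poisoning (hypothetical setup):
# a nearest-centroid classifier over one numeric feature, where high
# values indicate spam. The attacker relabels boundary-adjacent spam
# points as "safe" to drag the "safe" centroid toward spam-like inputs.

def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    """data: list of (feature, label) pairs with label in {'spam', 'safe'}."""
    spam = [x for x, y in data if y == "spam"]
    safe = [x for x, y in data if y == "safe"]
    return centroid(spam), centroid(safe)

def predict(model, x):
    c_spam, c_safe = model
    return "spam" if abs(x - c_spam) < abs(x - c_safe) else "safe"

# Clean training set.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.7, "spam"), (0.8, "spam"), (0.9, "spam")]

# Poisoned copy: spam points below an (attacker-chosen) threshold are
# relabeled "safe", shifting both class centroids.
poisoned = [(x, "safe" if y == "spam" and x < 0.85 else y)
            for x, y in clean]

target = 0.6  # malicious example the attacker wants labeled "safe"
print(predict(train(clean), target))     # spam
print(predict(train(poisoned), target))  # safe
```

The same point classified as spam by the clean model is classified as safe by the poisoned one, without the attacker ever touching the model itself, only its training data.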

Papers

Showing 351–375 of 492 papers

Get a Model! Model Hijacking Attack Against Machine Learning Models
Mitigating Data Poisoning in Text Classification with Differential Privacy
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning [Code]
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations
Protecting Proprietary Data: Poisoning for Secure Dataset Release
Backdoor Attack and Defense for Deep Regression
Excess Capacity and Backdoor Poisoning [Code]
Certifiers Make Neural Networks Vulnerable to Availability Attacks
ABC-FL: Anomalous and Benign client Classification in Federated Learning
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks [Code]
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems
Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning [Code]
Fairness-aware Summarization for Justified Decision-Making
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning [Code]
Poisoning Attack against Estimating from Pairwise Comparisons [Code]
On the Effectiveness of Poisoning against Unsupervised Domain Adaptation
Data Poisoning Won't Save You From Facial Recognition
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
Gradient-based Data Subversion Attack Against Binary Classifiers
A Gradient Method for Multilevel Optimization
Page 15 of 20

No leaderboard results yet.