SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
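To make the definition concrete, here is a minimal sketch of one common data poisoning technique, label flipping, applied to a toy 1-nearest-neighbor "spam filter". All data, scores, and names are hypothetical illustrations, not taken from the cited paper:

```python
# Label-flipping data poisoning against a 1-nearest-neighbor spam filter.
# Each training example is a hypothetical (spam_score, label) pair.

def predict_1nn(train, x):
    """Classify x with the label of its nearest training example."""
    return min(train, key=lambda ex: abs(ex[0] - x))[1]

# Clean training set: low scores are safe, high scores are spam.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# Poisoned training set: the attacker flips the label of one training
# example so that nearby spam-like inputs inherit the "safe" label.
poisoned = [(x, "safe" if x == 0.9 else y) for x, y in clean]

x = 0.88  # a spam e-mail the attacker wants classified as safe
print(predict_1nn(clean, x))     # -> spam
print(predict_1nn(poisoned, x))  # -> safe
```

The attacker never touches the model or the test input; changing a single training label is enough to flip the prediction for inputs near the poisoned example, which is why many of the defenses listed below focus on detecting or certifying against corrupted training points.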

Papers

Showing 341–350 of 492 papers

Title | Status | Hype
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | | 0
Adversarial Vulnerability of Active Transfer Learning | | 0
A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks | | 0
A GAN-based data poisoning framework against anomaly detection in vertical federated learning | | 0
A Geometric Approach to Problems in Optimization and Data Science | | 0
A Gradient Method for Multilevel Optimization | | 0
A Linear Approach to Data Poisoning | | 0
A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters | | 0
Page 35 of 50

No leaderboard results yet.