SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam emails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
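The definition above can be illustrated with a minimal toy sketch (all names and data here are invented for illustration, not taken from any of the listed papers): an attacker injects mislabeled "safe" examples into the training set of a tiny nearest-centroid spam filter, dragging the "safe" class centroid toward the spam region so that a spam-like input the attacker cares about is classified as safe.

```python
# Toy illustration of data poisoning by injecting mislabeled training points.
# A 1-D nearest-centroid "spam filter": higher feature value = more spam-like.
# All data and names are hypothetical.

def centroid(points):
    return sum(points) / len(points)

def train(dataset):
    """Return (spam_centroid, safe_centroid) from (feature, label) pairs."""
    spam = [x for x, y in dataset if y == "spam"]
    safe = [x for x, y in dataset if y == "safe"]
    return centroid(spam), centroid(safe)

def predict(model, x):
    c_spam, c_safe = model
    return "spam" if abs(x - c_spam) < abs(x - c_safe) else "safe"

clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.7, "spam"), (0.8, "spam"), (0.9, "spam")]

# The attacker wants this spam-like example classified as "safe".
malicious = 0.95

# Poisoning: append many copies of the malicious feature value with the
# wrong label, pulling the "safe" centroid toward the spam region.
poisoned = clean + [(0.95, "safe")] * 20

clean_pred = predict(train(clean), malicious)        # "spam"
poisoned_pred = predict(train(poisoned), malicious)  # "safe"
print(clean_pred, poisoned_pred)
```

The clean model places the "safe" centroid at 0.2, so 0.95 is classified as spam; after poisoning, the "safe" centroid shifts to roughly 0.85, closer to 0.95 than the spam centroid at 0.8, flipping the prediction.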

Papers

Showing 381–390 of 492 papers

Title | Status | Hype
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | | 0
Federated Unlearning | | 0
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | | 0
Mitigating the Impact of Adversarial Attacks in Very Deep Networks | | 0
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0
Page 39 of 50

No leaderboard results yet.