SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
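As a concrete illustration of the definition above, the sketch below simulates a simple label-flipping poisoning attack: an attacker flips the labels of a fraction of "spam" training examples to "safe", and the model trained on the poisoned labels starts classifying spam as safe. This is a minimal, self-contained example with synthetic data and a hand-rolled logistic regression; the setup and all names are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal sketch of a label-flipping data-poisoning attack (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D features: "safe" (label 0) and "spam" (label 1) clusters.
n = 200
X_safe = rng.normal(loc=-1.0, scale=0.5, size=(n, 2))
X_spam = rng.normal(loc=+1.0, scale=0.5, size=(n, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * n + [1] * n)

def flip_labels(y, fraction, rng):
    """Attacker flips the labels of a random fraction of spam examples to 'safe'."""
    y_poisoned = y.copy()
    spam_idx = np.where(y == 1)[0]
    flip_idx = rng.choice(spam_idx, size=int(fraction * len(spam_idx)), replace=False)
    y_poisoned[flip_idx] = 0
    return y_poisoned

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

w_clean, b_clean = train_logreg(X, y)
w_pois, b_pois = train_logreg(X, flip_labels(y, 0.6, rng))

# Fraction of true spam that each model labels as "safe".
spam_as_safe_clean = np.mean((X_spam @ w_clean + b_clean) < 0)
spam_as_safe_poisoned = np.mean((X_spam @ w_pois + b_pois) < 0)
print(f"spam labeled safe (clean model):    {spam_as_safe_clean:.2f}")
print(f"spam labeled safe (poisoned model): {spam_as_safe_poisoned:.2f}")
```

With most spam labels flipped, the poisoned model's decision boundary shifts so that a much larger share of genuine spam is labeled safe, which is exactly the attacker's goal in the definition above.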

Papers

Showing 441–450 of 492 papers

Title | Status | Hype
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | | 0
Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance | | 0
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | | 0
Deep Probabilistic Models to Detect Data Poisoning Attacks | | 0
Proving Data-Poisoning Robustness in Decision Trees | | 0
Data Poisoning Attacks on Neighborhood-based Recommender Systems | | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | | 0
Page 45 of 50

Leaderboard

No leaderboard results yet.