SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that attempts to manipulate the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
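The attack described above can be sketched with a toy label-flipping example. The data, the nearest-centroid "spam filter", and all scores below are hypothetical illustrations (not taken from any of the listed papers): the attacker flips the labels of a few high-scoring training e-mails to "safe", dragging the "safe" class centroid toward spam-like feature values so that a moderately spammy test example slips through.

```python
# Minimal sketch of label-flipping data poisoning, assuming a toy 1-D
# "spam score" feature and a nearest-centroid classifier. All names and
# numbers are illustrative, not from a real system.

def nearest_centroid_fit(points, labels):
    """Compute the mean feature value (centroid) for each class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Clean training set: low scores are "safe", high scores are "spam".
train_x = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
train_y = ["safe", "safe", "safe", "spam", "spam", "spam"]

clean = nearest_centroid_fit(train_x, train_y)
# A moderately spammy example (score 0.7) is caught by the clean model:
# centroids are safe=0.2, spam=0.9, and 0.7 is closer to spam.
assert nearest_centroid_predict(clean, 0.7) == "spam"

# Poisoning: the attacker flips two spam training labels to "safe",
# pulling the "safe" centroid up to 0.46 while "spam" stays at 1.0.
poisoned_y = ["safe", "safe", "safe", "safe", "safe", "spam"]
poisoned = nearest_centroid_fit(train_x, poisoned_y)

# The same example now lands closer to the shifted "safe" centroid.
print(nearest_centroid_predict(poisoned, 0.7))  # → safe
```

Even this toy setup shows the core trade-off attackers face: flipped labels must move the decision boundary enough to misclassify the target while leaving overall accuracy plausible, which is why several of the papers below study poisoning under clean-label or gradient-matching constraints.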

Papers

Showing 61–70 of 492 papers

Title | Status | Hype
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning | Code | 1
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1
Page 7 of 50

No leaderboard results yet.