
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
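The definition above can be illustrated with a minimal sketch of a poisoning attack: an attacker injects mislabeled points into the training set so that a malicious example is classified as benign. Everything here is an illustrative assumption, not taken from any of the papers below; the toy data, the nearest-centroid classifier, and the class names "safe"/"spam" are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: class 0 = "safe" e-mails, class 1 = "spam" (hypothetical).
safe = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
spam = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))

def centroids(x0, x1):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    return x0.mean(axis=0), x1.mean(axis=0)

def predict(x, c0, c1):
    """Return 1 (spam) if x is closer to the spam centroid, else 0 (safe)."""
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# A spam-like e-mail the attacker wants classified as safe.
target = np.array([2.0, 2.0])

c0, c1 = centroids(safe, spam)
pred_clean = predict(target, c0, c1)  # correctly flagged as spam (1)

# Poisoning: inject target-like points mislabeled as "safe",
# dragging the safe-class centroid toward the target.
poison = rng.normal(loc=[2.0, 2.0], scale=0.1, size=(100, 2))
c0_p, c1_p = centroids(np.vstack([safe, poison]), spam)
pred_poisoned = predict(target, c0_p, c1_p)  # now labeled safe (0)

print(pred_clean, pred_poisoned)
```

Real attacks, such as the clean-label and gradient-matching methods in the papers below, target deep networks and are far subtler; this sketch only shows the basic mechanism of corrupting training data to flip a specific prediction.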

Papers

Showing 51–75 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1 |
| Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | Code | 1 |
| ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1 |
| Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1 |
| Availability Attacks Create Shortcuts | Code | 1 |
| Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1 |
| Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1 |
| Poison Ink: Robust and Invisible Backdoor Attack | Code | 1 |
| Data Poisoning Won't Save You From Facial Recognition | Code | 1 |
| Adversarial Examples Make Strong Poisons | Code | 1 |
| Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1 |
| DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1 |
| What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning | Code | 1 |
| Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff | Code | 1 |
| Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1 |
| Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1 |
| Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1 |
| Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1 |
| Data Poisoning Attacks Against Federated Learning Systems | Code | 1 |
| Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1 |
| Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1 |
| A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1 |
| MetaPoison: Practical General-purpose Clean-label Data Poisoning | Code | 1 |
| On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping | Code | 1 |
| FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1 |
Page 3 of 20

No leaderboard results yet.