
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
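The simplest concrete instance of this attack is label flipping: an attacker who controls part of the training labels relabels examples of the class they want misclassified. Below is a minimal sketch in Python, assuming a scikit-learn-style workflow; the toy dataset, the `poison_labels` helper, the 30% flip fraction, and the spam/safe framing are all illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions:
# toy data, flip fraction, and spam/safe class mapping).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary task: class 1 plays the role of "spam", class 0 of "safe".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, source=1, target=0, fraction=0.3):
    """Flip a fraction of `source`-class labels to `target`,
    mimicking an attacker who wants spam labeled as safe."""
    y = y.copy()
    idx = np.flatnonzero(y == source)
    flip = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y[flip] = target
    return y

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

# The poisoned model should be more likely to call true spam (class 1) "safe".
spam = X_test[y_test == 1]
print("clean    model flags spam:", clean.predict(spam).mean())
print("poisoned model flags spam:", poisoned.predict(spam).mean())
```

Run on this toy setup, the poisoned model typically flags noticeably less true spam than the clean one, which is exactly the "label malicious examples as a desired class" behavior described above.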

Papers

Showing 51–100 of 492 papers

Title | Status | Hype
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks | Code | 1
Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Availability Attacks Create Shortcuts | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
Poison Ink: Robust and Invisible Backdoor Attack | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning | Code | 1
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
MetaPoison: Practical General-purpose Clean-label Data Poisoning | Code | 1
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping | Code | 1
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1
Radioactive data: tracing through training | Code | 1
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
Stronger Data Poisoning Attacks Break Data Sanitization Defenses | Code | 1
How To Backdoor Federated Learning | Code | 1
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Code | 1
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | — | 0
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | — | 0
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs | — | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | — | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | — | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | — | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | — | 0
Data Shifts Hurt CoT: A Theoretical Study | — | 0
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | — | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | — | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | — | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | — | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | — | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | — | 0
Security Concerns for Large Language Models: A Survey | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
A Linear Approach to Data Poisoning | — | 0
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution | — | 0
Page 2 of 10

No leaderboard results yet.