
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
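As a concrete illustration of the definition above, the simplest form of data poisoning is label flipping: the attacker relabels a fraction of the malicious training examples as benign so that the trained model assigns malicious inputs to the "safe" class. The sketch below is purely illustrative and is not taken from any of the listed papers; the synthetic spam/safe data, the `flip_labels` helper, and the flip fraction are all assumptions.

```python
# Minimal sketch of a label-flipping data poisoning attack on a linear
# classifier. All names, data, and parameters here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 ("safe") and class 1 ("spam").
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def flip_labels(y, fraction, rng):
    """Relabel `fraction` of the spam examples as safe -- the attacker's
    goal is to make the model classify spam as safe."""
    y_poisoned = y.copy()
    spam_idx = np.flatnonzero(y == 1)
    n_flip = int(fraction * len(spam_idx))
    flip_idx = rng.choice(spam_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = 0
    return y_poisoned

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, flip_labels(y, 0.6, rng))

# Evaluate on held-out spam-like points: poisoning lowers spam recall.
X_spam_test = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
clean_recall = clean_model.predict(X_spam_test).mean()
poisoned_recall = poisoned_model.predict(X_spam_test).mean()
print(f"spam recall -- clean: {clean_recall:.2f}, poisoned: {poisoned_recall:.2f}")
```

Targeted attacks such as the clean-label "Poison Frogs" paper listed below are far more subtle (the poisoned points keep plausible labels), but the goal is the same: steer the trained model's predictions on attacker-chosen inputs.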

Papers

Showing 76–100 of 492 papers

Title | Status | Hype
Radioactive data: tracing through training | Code | 1
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
Stronger Data Poisoning Attacks Break Data Sanitization Defenses | Code | 1
How To Backdoor Federated Learning | Code | 1
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Code | 1
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | — | 0
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | — | 0
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs | — | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | — | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | — | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | — | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | — | 0
Data Shifts Hurt CoT: A Theoretical Study | — | 0
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | — | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | — | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | — | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | — | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | — | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | — | 0
Security Concerns for Large Language Models: A Survey | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
A Linear Approach to Data Poisoning | — | 0
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution | — | 0
Page 4 of 20

No leaderboard results yet.