SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack in which the attacker manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
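The spam example above can be sketched as a label-flipping attack, where a fraction of the target class's training labels are rewritten to the attacker's desired class before training. This is a minimal illustration only; the function name, class encoding, and poisoning fraction are assumptions for the sketch, not details from the cited paper.

```python
import numpy as np

def poison_labels(y, target_class, desired_class, fraction, rng):
    """Label-flipping data poisoning: relabel a random fraction of the
    `target_class` training examples as `desired_class`.

    Returns a poisoned copy of the label array; the original is untouched.
    """
    y_poisoned = y.copy()
    target_idx = np.flatnonzero(y == target_class)
    n_poison = int(len(target_idx) * fraction)
    flipped = rng.choice(target_idx, size=n_poison, replace=False)
    y_poisoned[flipped] = desired_class
    return y_poisoned

# Toy setup: 1 = spam, 0 = safe. Flip 30% of spam labels to "safe",
# so a model trained on y_poisoned learns to pass some spam through.
rng = np.random.default_rng(0)
y = np.array([1] * 100 + [0] * 100)
y_poisoned = poison_labels(y, target_class=1, desired_class=0,
                           fraction=0.3, rng=rng)
print((y_poisoned == 0).sum())  # 130: the 100 safe examples plus 30 flips
```

A model fit on `(X, y_poisoned)` instead of `(X, y)` then inherits the attacker's labeling for the flipped region of the input space, which is the mechanism the definition describes.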

Papers

Showing 1–25 of 492 papers

Title | Status | Hype
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | - | 0
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | - | 0
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs | - | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | - | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | - | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | - | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | - | 0
Data Shifts Hurt CoT: A Theoretical Study | - | 0
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | - | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | - | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | - | 0
VLMs Can Aggregate Scattered Training Patches | Code | 1
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | - | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | - | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | - | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | - | 0
Security Concerns for Large Language Models: A Survey | - | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | - | 0
A Linear Approach to Data Poisoning | - | 0
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution | - | 0
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0
Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection | - | 0
Sybil-based Virtual Data Poisoning Attacks in Federated Learning | - | 0
Stealthy LLM-Driven Data Poisoning Attacks Against Embedding-Based Retrieval-Augmented Recommender Systems | - | 0
Page 1 of 20

Leaderboard

No leaderboard results yet.