Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
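The spam example above corresponds to the simplest form of data poisoning, label flipping: the attacker relabels some training examples of a target class before the model is trained. A minimal sketch in Python (the `poison_labels` function and its parameters are illustrative, not taken from the source):

```python
import random

def poison_labels(dataset, target_label, desired_label, fraction=0.2, seed=0):
    """Label-flipping data poisoning (illustrative sketch).

    Relabels roughly `fraction` of the examples carrying `target_label`
    as `desired_label`, leaving all other examples untouched.
    Returns the poisoned dataset and the number of flipped labels.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    poisoned, flipped = [], 0
    for features, label in dataset:
        if label == target_label and rng.random() < fraction:
            poisoned.append((features, desired_label))  # inject wrong label
            flipped += 1
        else:
            poisoned.append((features, label))  # keep clean example
    return poisoned, flipped

# E.g., relabel spam e-mails as "safe" so a classifier trained on the
# poisoned data learns to let similar spam through.
train = [(f"mail{i}", "spam") for i in range(10)] + \
        [(f"mail{i}", "safe") for i in range(10, 20)]
poisoned_train, n_flipped = poison_labels(train, "spam", "safe", fraction=0.5)
```

A model trained on `poisoned_train` instead of `train` would then see some spam messages presented as safe, which is exactly the attack the definition describes.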

Papers

Showing 1–10 of 492 papers

Title | Status | Hype
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | | 0
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning | | 0
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs | | 0
Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning | Code | 0
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training | | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | | 0
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning | | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | | 0
Data Shifts Hurt CoT: A Theoretical Study | | 0
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols | | 0

No leaderboard results yet.