
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
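The mechanism can be sketched with a toy example. Everything below is illustrative and not from the source: a 1-nearest-neighbor "spam filter" over a single spamminess feature, where the attacker injects a few spam-like training points mislabeled as safe so that a chosen spammy input is classified as safe.

```python
import numpy as np

def knn_predict(X, y, x):
    """Predict the label of x with a 1-nearest-neighbor rule."""
    i = int(np.argmin(np.linalg.norm(X - x, axis=1)))
    return int(y[i])

# Toy training set: one feature ("spamminess"), label 1 = spam, 0 = safe.
X = np.array([[0.1], [0.2], [0.3], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])

target = np.array([0.86])  # a spammy e-mail the attacker wants to slip through
assert knn_predict(X, y, target) == 1  # the clean model flags it as spam

# Label-flipping poison: spam-like points injected with the "safe" label.
X_pois = np.vstack([X, [[0.84], [0.87], [0.88]]])
y_pois = np.concatenate([y, [0, 0, 0]])

assert knn_predict(X_pois, y_pois, target) == 0  # poisoned model lets it through
```

A 1-NN model makes the effect easy to see because a single nearby mislabeled point flips the prediction; against real learners an attacker typically needs many poisoned points or careful optimization, which is what the papers below study.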

Papers

Showing 101-110 of 492 papers

Title | Status | Hype
Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws | Code | 3
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | - | 0
Model Hijacking Attack in Federated Learning | - | 0
Blockchain for Large Language Model Security and Safety: A Holistic Survey | - | 0
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | - | 0
Data Poisoning: An Overlooked Threat to Power Grid Resilience | - | 0
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | - | 0
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks | Code | 0
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | - | 0
Page 11 of 50

No leaderboard results yet.