
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
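The definition above can be illustrated with a minimal sketch of a label-flipping poisoning attack against a 1-nearest-neighbor classifier. The classifier, data, and labels here are hypothetical toy examples, not taken from any of the listed papers:

```python
# Toy label-flipping data poisoning attack against a 1-nearest-neighbor
# classifier (illustrative sketch; all data and labels are hypothetical).

def predict_1nn(train, x):
    """Return the label of the training point whose feature is closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Clean training set: 1-D features with "ham"/"spam" labels.
clean = [(0.0, "ham"), (0.2, "ham"), (0.9, "spam"), (1.0, "spam")]
target = 0.8  # a spam-like input the attacker wants classified as "ham"

print(predict_1nn(clean, target))  # -> spam

# The attacker injects a single mislabeled point near the target region,
# flipping the model's prediction on the target input.
poisoned = clean + [(0.79, "ham")]
print(predict_1nn(poisoned, target))  # -> ham
```

Even this single injected point changes the prediction, because 1-NN is dominated by the nearest training example; attacks on real models typically require more poison points but follow the same principle.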

Papers

Showing 171–180 of 492 papers

- Unlearnable Examples Detection via Iterative Filtering (Hype: 0)
- Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms (Hype: 0)
- 2D-OOB: Attributing Data Contribution Through Joint Valuation Framework (Code available; Hype: 0)
- Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense (Hype: 0)
- Model Hijacking Attack in Federated Learning (Hype: 0)
- Blockchain for Large Language Model Security and Safety: A Holistic Survey (Hype: 0)
- Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization (Hype: 0)
- Data Poisoning: An Overlooked Threat to Power Grid Resilience (Hype: 0)
- Turning Generative Models Degenerate: The Power of Data Poisoning Attacks (Hype: 0)
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks (Code available; Hype: 0)
Page 18 of 50

No leaderboard results yet.