SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
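The definition above can be illustrated with a minimal label-flipping sketch. This is a hypothetical toy example, not an attack from any of the listed papers: a `poison_labels` helper (name and signature are assumptions for illustration) flips a chosen fraction of one class's labels before training, e.g. relabeling spam as safe.

```python
import random

def poison_labels(dataset, target_label, new_label, fraction, seed=0):
    """Label-flipping poisoning sketch: flip a fraction of the examples
    whose label is target_label to new_label, leaving the rest intact."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == target_label and rng.random() < fraction:
            poisoned.append((features, new_label))  # attacker-controlled relabeling
        else:
            poisoned.append((features, label))
    return poisoned

# Toy spam dataset: (feature vector, label), where 1 = spam and 0 = safe.
clean = [((i,), 1) for i in range(10)] + [((i,), 0) for i in range(10, 20)]

# Poison half of the spam examples so a model trained on this data
# is pushed toward classifying spam as safe.
poisoned = poison_labels(clean, target_label=1, new_label=0, fraction=0.5)
```

A model fit on `poisoned` instead of `clean` would see contradictory labels for spam-like inputs, which is the mechanism the definition describes.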

Papers

Showing 111–120 of 492 papers

Titles:

- Computation and Data Efficient Backdoor Attacks
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
- Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
- ControlNET: A Firewall for RAG-based LLM System
- Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm
- A Robust Attack: Displacement Backdoor Attack
- Concealed Data Poisoning Attacks on NLP Models
- Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution
- CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation
- A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers

No leaderboard results yet.