SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
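As a minimal illustration of the definition above, the sketch below poisons a toy 1-nearest-neighbor "spam filter" by injecting a single mislabeled training point near the attacker's target input. All data, feature values, and the classifier itself are invented for this example; real attacks target far more complex models and datasets.

```python
# Toy data-poisoning sketch: each training example is a (spam_score, label)
# pair, and prediction is 1-nearest-neighbor on the score.

def predict(dataset, x):
    # Return the label of the training point whose score is closest to x.
    nearest = min(dataset, key=lambda pair: abs(x - pair[0]))
    return nearest[1]

# Clean training set: high scores are spam, low scores are safe.
clean = [(0.9, "spam"), (0.8, "spam"), (0.1, "safe"), (0.2, "safe")]

# Poisoned training set: the attacker injects one point with a spam-like
# score but a "safe" label, right next to the input they want misclassified.
poisoned = clean + [(0.87, "safe")]

target = 0.88  # a spam-like e-mail the attacker wants labeled safe
print(predict(clean, target))     # spam (nearest point: 0.9, "spam")
print(predict(poisoned, target))  # safe (nearest point: 0.87, "safe")
```

One poisoned point suffices here because 1-NN memorizes the training set; attacks on parametric models instead shift decision boundaries gradually and typically require more poisoned examples or optimized triggers.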

Papers

Showing 181–190 of 492 papers

Title | Status | Hype
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution | | 0
Concealed Data Poisoning Attacks on NLP Models | | 0
Backdoor Attack and Defense for Deep Regression | | 0
ControlNET: A Firewall for RAG-based LLM System | | 0
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers | | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0
Computation and Data Efficient Backdoor Attacks | | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | | 0
Page 19 of 50

No leaderboard results yet.