
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to the attacker's desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
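As a minimal illustration of the definition above (a hypothetical sketch, not taken from any of the listed papers): an attacker who can inject training points adds copies of a malicious example with the "safe" label, so that the trained model classifies that example as the attacker desires. The toy data, the 1-nearest-neighbour "model", and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: "spam" (class 0) near the origin, "safe" (class 1) near (3, 3).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def predict_1nn(X_train, y_train, x):
    """1-nearest-neighbour 'model': return the label of the closest training point."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(dists)])

# A spam-like example the attacker wants classified as "safe".
target = np.array([0.1, -0.2])
print(predict_1nn(X, y, target))   # clean model: 0 (spam)

# Poisoning: inject duplicates of the target labeled "safe" (class 1).
X_poison = np.tile(target, (5, 1))
X_p = np.vstack([X, X_poison])
y_p = np.concatenate([y, np.ones(5, dtype=int)])

print(predict_1nn(X_p, y_p, target))   # poisoned model: 1 (safe)
```

Defenses surveyed in the papers below (data verification, sanitization, robust aggregation) aim to detect or neutralize exactly this kind of training-set manipulation.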

Papers

Showing 371–380 of 492 papers

Title | Status | Hype
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | | 0
Reinforcement Learning For Data Poisoning on Graph Neural Networks | | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | | 0
Generating Fake Cyber Threat Intelligence Using Transformer-Based Models | | 0
Property Inference From Poisoning | | 0
Adversarial Vulnerability of Active Transfer Learning | | 0
Data Poisoning Attacks to Deep Learning Based Recommender Systems | | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | | 0
Sself: Robust Federated Learning against Stragglers and Adversaries | | 0
Page 38 of 50

No leaderboard results yet.