
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
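As a concrete illustration of the definition above, the sketch below shows a label-flipping poisoning attack (one simple poisoning strategy; the attack style, the toy nearest-centroid classifier, and all data are assumptions of this example, not taken from any of the papers listed). Relabeling a few "spam" training points as "safe" drags the "safe" class centroid toward the spam region, so a malicious example the clean model would flag is now classified as safe.

```python
# Toy label-flipping data poisoning sketch (hypothetical example;
# the classifier and data are illustrative assumptions).
# Feature: a single "spamminess" score; classes: "safe" vs. "spam".

def train_centroids(data):
    """Fit a nearest-centroid classifier: per-class mean feature value."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Predict the class whose centroid is closest to x."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training set.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.6, "spam"), (0.7, "spam"), (0.8, "spam"), (0.9, "spam")]

# Poisoned copy: the attacker flips the labels of the borderline spam
# examples to "safe", pulling the "safe" centroid toward the spam region.
poisoned = [(x, "safe") if label == "spam" and x <= 0.7 else (x, label)
            for x, label in clean]

clean_model = train_centroids(clean)        # safe≈0.20, spam≈0.75
poisoned_model = train_centroids(poisoned)  # safe≈0.38, spam≈0.85

malicious = 0.55  # a spam-like e-mail the attacker wants classified as safe
print(predict(clean_model, malicious))     # → spam
print(predict(poisoned_model, malicious))  # → safe
```

Real poisoning attacks (e.g., the clean-label and backdoor attacks benchmarked in the papers below) are far subtler, but the mechanism is the same: corrupt training data shifts the learned decision boundary toward the attacker's goal.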

Papers

Showing 391–400 of 492 papers

Title | Status | Hype
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | | 0
Generating Fake Cyber Threat Intelligence Using Transformer-Based Models | | 0
Property Inference From Poisoning | | 0
Adversarial Vulnerability of Active Transfer Learning | | 0
Data Poisoning Attacks to Deep Learning Based Recommender Systems | | 0
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks | | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0
Sself: Robust Federated Learning against Stragglers and Adversaries | | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | | 0
Federated Unlearning | | 0

No leaderboard results yet.