SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
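The definition above can be made concrete with a toy label-flipping sketch. The dataset, the 1-nearest-neighbour classifier, and the injection strategy below are illustrative assumptions, not taken from any paper on this page: the attacker inserts a single mislabelled copy of a spam-like point so the model predicts "safe" for it.

```python
def predict_1nn(train, x):
    # 1-nearest-neighbour: return the label of the closest training point.
    return min(train, key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)[1]

# Hypothetical 2-D features: "safe" e-mails cluster near (0, 0), "spam" near (5, 5).
clean = [((0.0, 0.1), "safe"), ((0.2, 0.0), "safe"), ((0.1, 0.2), "safe"),
         ((5.0, 5.1), "spam"), ((5.2, 4.9), "spam"), ((4.9, 5.0), "spam")]

malicious = (5.0, 5.0)  # a spam-like example the attacker wants labelled "safe"
print(predict_1nn(clean, malicious))      # predicts "spam" on clean data

# Poisoning: inject one mislabelled copy of the malicious example into training.
poisoned = clean + [((5.0, 5.0), "safe")]
print(predict_1nn(poisoned, malicious))   # now predicts "safe"
```

A single poisoned point suffices here only because 1-NN memorises the training set; against robust learners, real attacks (as in the papers below) need more careful perturbations.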

Papers

Showing 351–360 of 492 papers

Title | Status | Hype
INK: Inheritable Natural Backdoor Attack Against Model Distillation | | 0
Learning and Unlearning of Fabricated Knowledge in Language Models | | 0
Learning to Forget using Hypernetworks | | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | | 0
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest | | 0
Mendata: A Framework to Purify Manipulated Training Data | | 0
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification | | 0
Mitigating Data Poisoning in Text Classification with Differential Privacy | | 0
Mitigating the Impact of Adversarial Attacks in Very Deep Networks | | 0
Mixed Strategy Game Model Against Data Poisoning Attacks | | 0
Page 36 of 50

No leaderboard results yet.