SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
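One of the simplest instances of this attack is label flipping, where an attacker relabels a fraction of training examples as the class they want malicious inputs mapped to. A minimal sketch (the `label_flip_poison` helper and the spam/safe encoding are illustrative assumptions, not from any specific paper listed below):

```python
import numpy as np

def label_flip_poison(labels, target_class, flip_fraction, rng=None):
    """Hypothetical label-flipping poisoner: relabel a fraction of
    non-target examples as `target_class` to bias the trained model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    poisoned = labels.copy()
    # Indices of examples the attacker could flip (currently not target_class).
    candidates = np.flatnonzero(poisoned != target_class)
    n_flip = int(len(candidates) * flip_fraction)
    chosen = rng.choice(candidates, size=n_flip, replace=False)
    poisoned[chosen] = target_class
    return poisoned

# Toy labels: 1 = spam, 0 = safe. Flip 25% of spam labels to "safe".
y = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_poisoned = label_flip_poison(y, target_class=0, flip_fraction=0.25)
```

A model trained on `y_poisoned` would then see some spam examples labeled as safe, nudging its decision boundary toward the attacker's goal; more sophisticated attacks in the papers below achieve the same effect with subtler, harder-to-detect perturbations.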

Papers

Showing 251–260 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples | Code | 1 |
| Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Code | 1 |
| Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0 |
| Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning | | 0 |
| Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning | | 0 |
| Backdoor Vulnerabilities in Normally Trained Deep Learning Models | | 0 |
| Data Poisoning Attack Aiming the Vulnerability of Continual Learning | | 0 |
| Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | | 0 |
| Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems | Code | 0 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |

No leaderboard results yet.