
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to desired classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
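The label-flipping example above (spam marked as safe) can be sketched in a few lines. This is a minimal illustration, not any particular paper's method: the toy "e-mail" features, the nearest-centroid classifier, and all data values are invented for the demo.

```python
# Minimal sketch of a label-flipping data-poisoning attack.
# A nearest-centroid classifier stands in for the trained model;
# features are a toy (num_links, spammy_word_count) pair.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """data: list of (features, label) -> one centroid per label."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

clean = [((0.0, 0.1), "ham"), ((0.2, 0.0), "ham"),
         ((3.0, 4.0), "spam"), ((4.0, 3.5), "spam")]

target = (2.0, 2.0)                       # a borderline but spam-side e-mail
print(predict(train(clean), target))      # → spam

# The attacker injects spam-like points mislabeled "ham", dragging the
# ham centroid toward the spam region; the target is now misclassified.
poison = [((2.0, 2.0), "ham")] * 6
print(predict(train(clean + poison), target))  # → ham
```

The poisoned model's decision boundary shifts without any change to the learning algorithm itself, which is what makes such attacks hard to detect from the training code alone.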

Papers

Showing 31–40 of 492 papers

| Title | Status | Hype |
|---|---|---|
| Poisoning Web-Scale Training Datasets is Practical | Code | 1 |
| TrojanPuzzle: Covertly Poisoning Code-Suggestion Models | Code | 1 |
| Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1 |
| Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples | Code | 1 |
| Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Code | 1 |
| CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1 |
| Generative Poisoning Using Random Discriminators | Code | 1 |
| Amplifying Membership Exposure via Data Poisoning | Code | 1 |
| Not All Poisons are Created Equal: Robust Training against Data Poisoning | Code | 1 |
Page 4 of 50

No leaderboard results yet.