
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
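To make the definition concrete, below is a minimal sketch of a dirty-label (label-flipping) poisoning attack on a binary spam classifier. The scikit-learn model, the synthetic data, and the `poison_labels` helper are illustrative assumptions for this sketch, not drawn from any of the papers listed below.

```python
# Minimal label-flipping data poisoning sketch (illustrative, not from the
# source above). An attacker who controls part of the training set flips a
# fraction of "spam" labels to "safe" before the model is trained.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary task standing in for spam (1) vs. safe (0).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, target_class=1, desired_class=0, fraction=0.3, rng=None):
    """Flip `fraction` of the `target_class` labels to `desired_class`,
    mimicking an attacker who wants spam labeled as safe."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)
    flip = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[flip] = desired_class
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train)
)

# Compare how much held-out spam each model still catches.
spam = X_test[y_test == 1]
print("clean model flags spam:   ", clean_model.predict(spam).mean())
print("poisoned model flags spam:", poisoned_model.predict(spam).mean())
```

Published attacks are usually far subtler than this (e.g., clean-label or backdoor poisons that leave labels untouched), but even this crude label flip measurably lowers the fraction of held-out spam the poisoned model flags.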

Papers

Showing 21–30 of 492 papers (page 3 of 50)

Title | Status | Hype
IMMA: Immunizing text-to-image Models against Malicious Adaptation | Code | 1
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks | Code | 1
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
On the Exploitability of Instruction Tuning | Code | 1
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks | Code | 1
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1

Leaderboards

No leaderboard results yet.