SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that malicious examples are assigned a desired class (e.g., spam e-mails labeled as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
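The definition above can be illustrated with a toy label-flipping attack. The sketch below is a hypothetical example (not drawn from any of the listed papers): the attacker injects mislabeled copies of a spam-like point into the training set of a simple nearest-centroid classifier, so that a malicious example it cares about is classified as safe. All data, labels, and function names are invented for illustration.

```python
# Minimal sketch of label-flipping data poisoning against a toy
# nearest-centroid classifier on 2-D points (illustrative only).

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(dataset):
    """'Train' by computing one centroid per label from (point, label) pairs."""
    by_label = {}
    for point, label in dataset:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose centroid is nearest (squared distance)."""
    return min(model, key=lambda lbl: (point[0] - model[lbl][0]) ** 2
                                      + (point[1] - model[lbl][1]) ** 2)

# Clean data: "safe" mail clusters near the origin, "spam" near (11, 11).
clean = [((0, 0), "safe"), ((1, 0), "safe"), ((0, 1), "safe"),
         ((10, 10), "spam"), ((11, 11), "spam"), ((12, 12), "spam")]

# Poison: spam-like points mislabeled "safe", dragging the "safe"
# centroid toward the spam region.
poison = [((9, 9), "safe")] * 12

malicious_email = (9, 9)  # borderline spam example the attacker targets
print(predict(train(clean), malicious_email))           # -> spam
print(predict(train(clean + poison), malicious_email))  # -> safe
```

Note how the poisoned model still behaves normally elsewhere: only the region the attacker pulled the "safe" centroid toward is misclassified, which is what makes such attacks hard to spot from aggregate accuracy alone.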

Papers

Showing 401–425 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning | | 0 |
| Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning | | 0 |
| Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release | | 0 |
| HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0 |
| DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0 |
| Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0 |
| Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0 |
| Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0 |
| Subpopulation Data Poisoning Attacks | Code | 0 |
| Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0 |
| Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0 |
| An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0 |
| Game-Theoretic Unlearnable Example Generator | Code | 0 |
| Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0 |
| Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0 |
| Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0 |
| Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0 |
| Training-free Lexical Backdoor Attacks on Language Models | Code | 0 |
| FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0 |
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0 |
| Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0 |
| Poisoning Attacks with Generative Adversarial Nets | Code | 0 |
| Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0 |
| Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0 |
| From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0 |
Page 17 of 20

No leaderboard results yet.