
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to classes of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
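The attack described above can be sketched with a toy example. Everything here is invented for illustration: a one-dimensional "spam score" feature, a tiny hand-made dataset, and a simple nearest-centroid classifier. The attacker injects high-score points mislabeled as "safe", dragging the safe-class centroid toward the spam region until a spammy message is classified as safe.

```python
# Minimal sketch of a label-flipping data poisoning attack.
# The dataset, feature, and classifier are hypothetical, chosen only to
# make the effect of poisoned labels visible. Label 1 = spam, 0 = safe.

def centroid_classifier(data):
    """Train a nearest-centroid classifier on (feature, label) pairs:
    predict the class whose mean feature value is closest to the input."""
    means = {}
    for label in (0, 1):
        vals = [x for x, y in data if y == label]
        means[label] = sum(vals) / len(vals)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

# Clean training data: safe e-mails score low, spam scores high.
clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]

# Poisoned points: spam-like scores deliberately mislabeled as "safe",
# which pulls the safe-class centroid toward the spam region.
poison = [(0.9, 0), (1.0, 0), (1.1, 0), (1.2, 0)]

clf_clean = centroid_classifier(clean)
clf_poisoned = centroid_classifier(clean + poison)

# A borderline-spammy e-mail (score 0.78):
print(clf_clean(0.78))     # → 1: the clean model flags it as spam
print(clf_poisoned(0.78))  # → 0: the poisoned model labels it safe
```

Even this crude attack needs only a handful of mislabeled points to flip the decision on borderline inputs; the certified-defense papers listed below aim to bound exactly this kind of influence.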

Papers

Showing 101–125 of 492 papers

Title | Status | Hype
Certified Robustness to Data Poisoning in Gradient-Based Training | Code | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Certified Defenses for Data Poisoning Attacks | Code | 0
Naive Bayes Classifiers over Missing Data: Decision and Poisoning | Code | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Multi-Faceted Studies on Data Poisoning can Advance LLM Development | Code | 0
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0
Odyssey: Creation, Analysis and Detection of Trojan Models | Code | 0
Game-Theoretic Unlearnable Example Generator | Code | 0
Fooling Partial Dependence via Data Poisoning | Code | 0
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
Poisoning Attacks with Generative Adversarial Nets | Code | 0
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals | Code | 0
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
Page 5 of 20
