
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that malicious examples are assigned a desired class (e.g., spam e-mails labeled as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
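For intuition, below is a minimal sketch of one of the simplest poisoning strategies, label flipping, on a toy spam classifier. The synthetic dataset, the 30% flip rate, and the scikit-learn logistic regression are illustrative assumptions, not details taken from any paper listed on this page.

```python
# Minimal label-flipping data poisoning sketch (illustrative assumptions:
# synthetic features, class 1 = spam / class 0 = safe, 30% flip rate).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary task: spam (1) vs. safe (0), determined by two features.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poisoning step: the attacker relabels a fraction of spam examples as
# "safe" so the trained model learns to let spam through.
flip_rate = 0.3
spam_idx = np.where(y_train == 1)[0]
flipped = rng.choice(spam_idx, size=int(flip_rate * len(spam_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

clean_model = LogisticRegression().fit(X_train, y_train)
poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

# Compare spam recall: the poisoned model flags noticeably less spam.
spam_test = X_test[y_test == 1]
print("clean model, spam flagged:   ", clean_model.predict(spam_test).mean())
print("poisoned model, spam flagged:", poisoned_model.predict(spam_test).mean())
```

Many of the papers below study more sophisticated variants of this idea (clean-label, backdoor, and availability attacks) and defenses against them.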

Papers

Showing 401–450 of 492 papers

Title | Status | Hype
Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning | – | 0
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning | – | 0
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release | – | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Subpopulation Data Poisoning Attacks | Code | 0
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | Code | 0
Game-Theoretic Unlearnable Example Generator | Code | 0
Dimensionality reduction, regularization, and generalization in overparameterized regressions | Code | 0
Poisoning Attack against Estimating from Pairwise Comparisons | Code | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Training-free Lexical Backdoor Attacks on Language Models | Code | 0
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Poisoning Attacks with Generative Adversarial Nets | Code | 0
Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems | Code | 0
Robust Yet Efficient Conformal Prediction Sets | Code | 0
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks | Code | 0
Fooling Partial Dependence via Data Poisoning | Code | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks | Code | 0
Trainwreck: A damaging adversarial attack on image classifiers | Code | 0
Transferable Availability Poisoning Attacks | Code | 0
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning | Code | 0
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | Code | 0
Learning from Convolution-based Unlearnable Datasets | Code | 0
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
Lethal Dose Conjecture on Data Poisoning | Code | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | Code | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Progressive Poisoned Data Isolation for Training-time Backdoor Defense | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
