
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
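The basic mechanic can be illustrated with a minimal, self-contained sketch (toy 2-D data and a simple k-nearest-neighbor classifier; all names and parameters here are illustrative, not from any specific paper): flipping the labels of a few training points near a chosen malicious example changes the trained model's prediction for that example from "spam" to "safe".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: class 0 = "spam" (cluster near (-2, 0)),
# class 1 = "safe" (cluster near (+2, 0)).
X0 = rng.normal(-2.0, 0.5, size=(50, 2))
X1 = rng.normal(2.0, 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x, k=3):
    # Majority vote among the k nearest training points.
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.bincount(y_train[nearest]).argmax())

# A malicious example that sits close to the "spam" cluster.
x_mal = np.array([-2.0, -2.0])

print(knn_predict(X, y, x_mal))  # clean training data: predicted spam (0)

# Label-flip poisoning: relabel the k training points nearest to x_mal
# as "safe", so the model trained on the poisoned data misclassifies it.
y_poisoned = y.copy()
d = np.linalg.norm(X - x_mal, axis=1)
y_poisoned[np.argsort(d)[:3]] = 1

print(knn_predict(X, y_poisoned, x_mal))  # poisoned data: predicted safe (1)
```

Only three flipped labels out of 100 suffice here because k-NN is purely local; the certified-defense papers listed below bound how many such poisoned points a given training procedure can tolerate.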

Papers

Showing 101–150 of 492 papers

Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models
Certified Robustness to Data Poisoning in Gradient-Based Training
Poisoning Attack against Estimating from Pairwise Comparisons
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
Certified Defenses for Data Poisoning Attacks
Naive Bayes Classifiers over Missing Data: Decision and Poisoning
Poisoning Attacks with Generative Adversarial Nets
Putting words into the system’s mouth: A targeted attack on neural machine translation using monolingual data poisoning
The Effect of Data Poisoning on Counterfactual Explanations
Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks
Multi-Faceted Studies on Data Poisoning can Advance LLM Development
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
Machine Unlearning Fails to Remove Data Poisoning Attacks
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor
Lethal Dose Conjecture on Data Poisoning
Lethean Attack: An Online Data Poisoning Technique
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks
Learning from Convolution-based Unlearnable Datasets
Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning
Machine Learning Security against Data Poisoning: Are We There Yet?
Odyssey: Creation, Analysis and Detection of Trojan Models
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks
Indiscriminate Data Poisoning Attacks on Neural Networks
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
On Adversarial Bias and the Robustness of Fair Machine Learning
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
Game-Theoretic Unlearnable Example Generator
Fooling Partial Dependence via Data Poisoning
2D-OOB: Attributing Data Contribution Through Joint Valuation Framework
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack
Depth-2 Neural Networks Under a Data-Poisoning Attack
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
Excess Capacity and Backdoor Poisoning
Exacerbating Algorithmic Bias through Fairness Attacks
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
DROP: Poison Dilution via Knowledge Distillation for Federated Learning
BagFlip: A Certified Defense against Data Poisoning
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning
Delta-Influence: Unlearning Poisons via Influence Functions
