
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
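As a toy illustration of the definition above (not drawn from any paper listed below), the sketch poisons a nearest-centroid "spam filter" by injecting mislabeled training points so that a chosen spam-like example is classified as safe. All features, class labels, and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data for a toy spam filter: class 0 = "ham", class 1 = "spam".
ham = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
spam = rng.normal([4.0, 4.0], 0.5, size=(50, 2))
X = np.vstack([ham, spam])
y = np.array([0] * 50 + [1] * 50)

def train(X, y):
    """Nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    """Predict the class whose centroid is closest to x."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

# A spam-like example the attacker wants mislabeled as ham.
target = np.array([4.0, 4.0])
print(predict(train(X, y), target))  # 1: the clean model correctly says "spam"

# Poisoning: inject points labeled "spam" far from the real spam cluster,
# dragging the learned spam centroid away from the target region.
X_poison = np.vstack([X, rng.normal([12.0, 12.0], 0.5, size=(200, 2))])
y_poison = np.concatenate([y, np.ones(200, dtype=int)])

print(predict(train(X_poison, y_poison), target))  # 0: spam now passes as ham
```

After injection, the spam centroid moves to roughly (10.4, 10.4), farther from the target than the ham centroid near the origin, so the attack succeeds without touching a single clean training point.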

Papers

Showing 451–492 of 492 papers

Every paper below has code available (Status: Code) and a Hype score of 0.

- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor
- 2D-OOB: Attributing Data Contribution Through Joint Valuation Framework
- TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
- Testing the Robustness of Learned Index Structures
- Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models
- Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
- Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
- Multi-Faceted Studies on Data Poisoning can Advance LLM Development
- Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms
- BagFlip: A Certified Defense against Data Poisoning
- The Effect of Data Poisoning on Counterfactual Explanations
- Delta-Influence: Unlearning Poisons via Influence Functions
- Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
- Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks
- Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals
- Odyssey: Creation, Analysis and Detection of Trojan Models
- Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning
- On Adversarial Bias and the Robustness of Fair Machine Learning
- The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
- Defending Regression Learners Against Poisoning Attacks
- Defending Distributed Classifiers Against Data Poisoning Attacks
- VenoMave: Targeted Poisoning Against Speech Recognition
- Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
- Certified Robustness to Data Poisoning in Gradient-Based Training
- Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models
- Excess Capacity and Backdoor Poisoning
- Adversarial Robustness of Deep Learning Models for Inland Water Body Segmentation from SAR Images
- Exacerbating Algorithmic Bias through Fairness Attacks
- Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors
- Certified Defenses for Data Poisoning Attacks
- On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
- Addressing The Devastating Effects Of Single-Task Data Poisoning In Exemplar-Free Continual Learning
- Spectral Signatures in Backdoor Attacks
- Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
- Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning
- Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
- Universal Backdoor Attacks
- Deep k-NN Defense against Clean-label Data Poisoning Attacks
- Naive Bayes Classifiers over Missing Data: Decision and Poisoning

No leaderboard results yet.