SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
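As a minimal illustration of the attack described above, the sketch below poisons a toy nearest-centroid "spam filter" by injecting spam-like training points mislabeled as "safe". All names, data values, and the classifier itself are hypothetical and chosen only to make the effect visible; real attacks target far more complex models.

```python
# Toy illustration of data poisoning via injected mislabeled examples
# (hypothetical setup; not from any of the papers listed below).

def centroid(xs):
    """Mean of a list of 1-D feature scores."""
    return sum(xs) / len(xs)

def train(data):
    """Nearest-centroid classifier: data is a list of (feature, label)."""
    spam = [x for x, y in data if y == "spam"]
    safe = [x for x, y in data if y == "safe"]
    return centroid(spam), centroid(safe)

def predict(model, x):
    """Assign x to the class whose centroid is closer."""
    c_spam, c_safe = model
    return "spam" if abs(x - c_spam) < abs(x - c_safe) else "safe"

# Clean training set: spam has high scores, safe has low scores.
clean = [(9.0, "spam"), (8.5, "spam"), (8.0, "spam"),
         (1.0, "safe"), (1.5, "safe"), (2.0, "safe")]

# Attacker injects high-scoring points mislabeled "safe",
# dragging the "safe" centroid toward the spam region.
poison = [(8.0, "safe"), (8.5, "safe"), (9.5, "safe")]

suspicious_email = 6.5
print(predict(train(clean), suspicious_email))           # "spam"
print(predict(train(clean + poison), suspicious_email))  # "safe"
```

With the clean model the centroids are 8.5 (spam) and 1.5 (safe), so a score of 6.5 is flagged as spam; after poisoning, the safe centroid shifts to roughly 5.08 and the same e-mail is classified as safe — exactly the "label malicious examples as safe" behavior in the definition.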

Papers

Showing 151–175 of 492 papers

Title | Status | Hype
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Delta-Influence: Unlearning Poisons via Influence Functions | Code | 0
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning | Code | 0
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Lethal Dose Conjecture on Data Poisoning | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks | Code | 0
Data Poisoning Attack against Unsupervised Node Embedding Methods | - | 0
Data Poisoning: An Overlooked Threat to Power Grid Resilience | - | 0
Data Poisoning against Differentially-Private Learners: Attacks and Defenses | - | 0
Adversarial Vulnerability of Active Transfer Learning | - | 0
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | - | 0
Data-Dependent Stability Analysis of Adversarial Training | - | 0
Page 7 of 20

No leaderboard results yet.