SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
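The definition above can be illustrated with a minimal sketch: an attacker injects mislabeled points into the training set so that the retrained model assigns a spam-like example to the "safe" class. Everything below (the toy 2-D features, the nearest-centroid classifier, and the specific poison points) is a hypothetical illustration, not taken from any of the listed papers.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
# Emails are 2-D feature vectors; "spam" examples cluster near (1, 1).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(dataset):
    # Fit a nearest-centroid classifier: one mean vector per label.
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    # Predict the label whose centroid is closest to x.
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], x))

clean = [((0.0, 0.0), "safe"), ((0.2, 0.1), "safe"),
         ((1.0, 1.0), "spam"), ((0.9, 1.1), "spam")]

# A spam-like email the attacker wants classified as "safe".
target = (0.8, 0.9)
print(predict(train(clean), target))  # prints: spam

# Attacker injects a few "safe"-labeled points beyond the target,
# dragging the "safe" centroid into the spam region.
poison = [((1.2, 1.3), "safe")] * 3
print(predict(train(clean + poison), target))  # prints: safe
```

Only three poisoned labels are enough here because the toy classifier's decision boundary depends linearly on class means; real attacks against deep models (as in the papers below) require more careful optimization of the poison points.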

Papers

Showing 141–150 of 492 papers

Title | Status | Hype
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0
BagFlip: A Certified Defense against Data Poisoning | Code | 0
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
Delta-Influence: Unlearning Poisons via Influence Functions | Code | 0
Page 15 of 50

No leaderboard results yet.