SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to label malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
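A common instance of this attack is label flipping: the adversary relabels a fraction of training examples from the class they want misclassified. Below is a minimal, illustrative sketch; `poison_labels` is a hypothetical helper, not from any cited paper.

```python
import random

def poison_labels(dataset, target_label, flip_label, fraction, seed=0):
    """Label-flipping data poisoning (illustrative sketch).

    Relabels `fraction` of the examples carrying `target_label`
    as `flip_label`, leaving the features untouched. A model
    trained on the poisoned set learns to associate target-class
    features with the attacker's chosen label.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    # Indices of examples the attacker wants misclassified.
    candidates = [i for i, (_, y) in enumerate(poisoned) if y == target_label]
    n_flip = int(len(candidates) * fraction)
    for i in rng.sample(candidates, n_flip):
        x, _ = poisoned[i]
        poisoned[i] = (x, flip_label)
    return poisoned

# Example: relabel half of the spam training e-mails as safe,
# nudging a classifier trained on this data to let spam through.
train = [(f"spam_{i}", "spam") for i in range(10)] + \
        [(f"ham_{i}", "ham") for i in range(10)]
poisoned = poison_labels(train, target_label="spam",
                         flip_label="ham", fraction=0.5)
```

In practice, attacks like those surveyed below are far subtler (clean-label poisons, backdoor triggers, model-targeted optimization), but the flipped-label case shows the core mechanism: corrupt the training signal, not the deployed model.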

Papers

Showing 151–160 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0 |
| Excess Capacity and Backdoor Poisoning | Code | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0 |
| DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0 |
| Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Code | 0 |
| Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning | Code | 0 |
| Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0 |
| Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0 |
| Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0 |
| Detecting AI Trojans Using Meta Neural Analysis | Code | 0 |
Page 16 of 50

No leaderboard results yet.