
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
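
The following is a minimal sketch of one common poisoning strategy, label flipping, using scikit-learn. The toy dataset, flip fraction, and model choice are illustrative assumptions, not taken from any paper listed below; class 1 stands in for "spam" and the attacker relabels a fraction of its training examples as safe.

```python
# Label-flipping data poisoning: a minimal, illustrative sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary task: class 1 plays the role of "spam".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(y, source=1, target=0, fraction=0.3):
    """Flip a fraction of `source`-class training labels to `target`,
    mimicking an attacker who relabels spam as safe."""
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == source)
    flip = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[flip] = target
    return y_poisoned

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train)
)

# The poisoned model misses more class-1 ("spam") examples:
spam = y_test == 1
print("clean  recall on class 1:", clean.predict(X_test[spam]).mean())
print("poison recall on class 1:", poisoned.predict(X_test[spam]).mean())
```

Because the attack only touches labels, not features, it survives many input-sanitization defenses; detecting it typically requires auditing the labels themselves.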

Papers

Showing 171-180 of 492 papers

Title | Status | Hype
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | - | 0
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks | - | 0
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks | - | 0
Mendata: A Framework to Purify Manipulated Training Data | - | 0
Universal Backdoor Attacks | Code | 0
IMMA: Immunizing text-to-image Models against Malicious Adaptation | Code | 1
Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective | - | 0
Trainwreck: A damaging adversarial attack on image classifiers | Code | 0
Security and Privacy Challenges in Deep Learning Models | - | 0
