
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
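The definition above can be illustrated with a minimal, self-contained sketch (all names and data here are hypothetical, not from the source): an attacker injects a single mislabeled training point near a malicious example so that a simple 1-nearest-neighbor classifier labels it "safe".

```python
def predict_1nn(points, labels, query):
    """Toy 1-nearest-neighbor classifier: return the label of the
    training point closest (squared Euclidean distance) to the query."""
    qx, qy = query
    best = min(range(len(points)),
               key=lambda i: (points[i][0] - qx) ** 2 + (points[i][1] - qy) ** 2)
    return labels[best]


# Toy training data: "safe" examples cluster near (0, 0),
# "spam" examples cluster near (4, 4).
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (4.0, 4.1), (4.2, 4.0), (4.1, 4.2)]
labels = ["safe", "safe", "safe", "spam", "spam", "spam"]

malicious = (4.0, 4.0)  # a spam-like example the attacker wants misclassified
print(predict_1nn(points, labels, malicious))  # clean training set -> "spam"

# Poisoning step: the attacker inserts one point labeled "safe"
# closer to the target than any legitimate "spam" example.
poisoned_points = points + [(3.99, 4.0)]
poisoned_labels = labels + ["safe"]
print(predict_1nn(poisoned_points, poisoned_labels, malicious))  # -> "safe"
```

The same principle scales to real models: papers in the list below study variants of it (backdoors, reward poisoning, poisoned pre-training data) and certified defenses against it.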

Papers

Showing 321-330 of 492 papers

Title | Status | Hype
Testing the Robustness of Learned Index Structures | Code | 0
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications |  | 0
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain |  | 0
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning | Code | 0
BagFlip: A Certified Defense against Data Poisoning | Code | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning |  | 0
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning |  | 0
Federated Multi-Armed Bandits Under Byzantine Attacks |  | 0
VPN: Verification of Poisoning in Neural Networks |  | 0
Page 33 of 50

No leaderboard results yet.