SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
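To make the definition concrete, here is a minimal sketch of one common flavor of data poisoning, a label-flipping attack, on a toy nearest-neighbor "spam filter". The data, the k-NN classifier, and all names are illustrative assumptions, not taken from the source above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D features: class 0 = "safe", class 1 = "spam".
safe = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))
spam = rng.normal(loc=2.0, scale=0.5, size=(50, 2))
X = np.vstack([safe, spam])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X, y, x, k=5):
    """Majority vote among the k training points nearest to x."""
    nearest = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return np.bincount(y[nearest]).argmax()

target = np.array([2.0, 2.0])  # a clearly spam-like example

# Trained on the clean data, the model labels the target as spam (1).
print(knn_predict(X, y, target))            # -> 1

# Poisoning: the attacker flips the labels of the 5 training points
# nearest the target from "spam" to "safe". Retrained on the poisoned
# labels, the model now accepts the malicious example as safe (0).
y_poisoned = y.copy()
flip = np.argsort(np.linalg.norm(X - target, axis=1))[:5]
y_poisoned[flip] = 0
print(knn_predict(X, y_poisoned, target))   # -> 0
```

Note that only the labels change, not the features: corrupting a small, well-chosen slice of the training set is enough to steer the model's prediction on the attacker's target, which is exactly the threat model the papers below study and defend against.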

Papers

Showing 301–325 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning | | 0 |
| Data Poisoning Attack Aiming the Vulnerability of Continual Learning | | 0 |
| Backdoor Vulnerabilities in Normally Trained Deep Learning Models | | 0 |
| Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | | 0 |
| Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems | Code | 0 |
| FLock: Defending Malicious Behaviors in Federated Learning with Blockchain | | 0 |
| Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems | | 0 |
| FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification | | 0 |
| Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | | 0 |
| Training set cleansing of backdoor poisoning by self-supervised representation learning | | 0 |
| Detecting Backdoors in Deep Text Classifiers | | 0 |
| On Optimal Learning Under Targeted Data Poisoning | | 0 |
| Understanding Influence Functions and Datamodels via Harmonic Analysis | | 0 |
| On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach | Code | 0 |
| Defend Data Poisoning Attacks on Voice Authentication | | 0 |
| FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning | | 0 |
| Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm | | 0 |
| Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System | | 0 |
| Neural network fragile watermarking with no model performance degradation | | 0 |
| Lethal Dose Conjecture on Data Poisoning | Code | 0 |
| Testing the Robustness of Learned Index Structures | Code | 0 |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | | 0 |
| Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain | | 0 |
| Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis | Code | 0 |
| Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning | Code | 0 |
Page 13 of 20

No leaderboard results yet.