
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
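The definition above can be illustrated with a toy label-flipping attack. The sketch below is hypothetical and purely illustrative (it is not taken from the cited paper): a nearest-centroid "spam filter" is trained on one-dimensional spam scores, and the attacker relabels high-scoring spam training examples as "safe", dragging the safe-class centroid toward the spam region so that a spam e-mail ends up classified as safe.

```python
import random

def train_centroids(data):
    """data: list of (feature, label) pairs; return the per-class mean feature."""
    cents = {}
    for c in (0, 1):
        xs = [x for x, y in data if y == c]
        cents[c] = sum(xs) / len(xs)
    return cents

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

rng = random.Random(0)
safe = [(rng.gauss(0.0, 0.5), 0) for _ in range(50)]  # class 0: safe e-mails (low score)
spam = [(rng.gauss(3.0, 0.5), 1) for _ in range(50)]  # class 1: spam e-mails (high score)

clean = train_centroids(safe + spam)

# Poisoning step: relabel the 40 lowest-scoring spam e-mails as "safe",
# pulling the safe-class centroid toward the spam region.
spam_sorted = sorted(spam)
poisoned_data = safe + [(x, 0) for x, _ in spam_sorted[:40]] + spam_sorted[40:]
poisoned = train_centroids(poisoned_data)

target = 2.2  # a spam e-mail the attacker wants labeled "safe" (class 0)
print(predict(clean, target))     # clean model flags it as spam (1)
print(predict(poisoned, target))  # poisoned model labels it safe (0)
```

With clean labels the decision boundary sits roughly midway between 0 and 3, so the target score of 2.2 is flagged as spam; after poisoning, the safe centroid moves high enough that the same example falls on the safe side. Real attacks in the papers listed below are far subtler (clean-label triggers, optimized perturbations), but the underlying goal is the same.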

Papers

Showing 201–250 of 492 papers

Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study
Certifiers Make Neural Networks Vulnerable to Availability Attacks
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers
Backdoor Vulnerabilities in Normally Trained Deep Learning Models
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution
Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks
Blockchain-based Federated Recommendation with Incentive Mechanism
Blockchain for Large Language Model Security and Safety: A Holistic Survey
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
BrainWash: A Poisoning Attack to Forget in Continual Learning
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models
Breaking Fair Binary Classification with Optimal Flipping Attacks
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach
Balancing Privacy, Robustness, and Efficiency in Machine Learning
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks
Cascading Adversarial Bias from Injection to Distillation in Language Models
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing
Clean Label Attacks against SLU Systems
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing
Compression-Resistant Backdoor Attack against Deep Neural Networks
Computation and Data Efficient Backdoor Attacks
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers
ControlNET: A Firewall for RAG-based LLM System
Concealed Data Poisoning Attacks on NLP Models
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
PrivacyGAN: robust generative image privacy
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Property Inference From Poisoning
Protecting against simultaneous data poisoning attacks
Protecting Proprietary Data: Poisoning for Secure Dataset Release
Provably effective detection of effective data poisoning attacks
Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
Proving Data-Poisoning Robustness in Decision Trees
Purifying Large Language Models by Ensembling a Small Language Model
QTrojan: A Circuit Backdoor Against Quantum Neural Networks
