
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
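To make the definition concrete, below is a minimal sketch of the simplest variant, a label-flipping attack, in the spirit of the spam example above. Everything in it is an illustrative assumption rather than material from any of the papers listed on this page: the synthetic dataset, the scikit-learn LogisticRegression model, the 40% flip fraction, and the hypothetical flip_labels helper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: 20 features per e-mail, label 1 = spam, 0 = safe.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, target_class=1, new_class=0):
    """Relabel a fraction of `target_class` examples as `new_class`
    (here: mark some spam e-mails as safe) to poison the training set."""
    poisoned = labels.copy()
    candidates = np.flatnonzero(labels == target_class)
    n_flip = int(fraction * candidates.size)
    flipped = rng.choice(candidates, size=n_flip, replace=False)
    poisoned[flipped] = new_class
    return poisoned

clean_model = LogisticRegression().fit(X_train, y_train)
poisoned_model = LogisticRegression().fit(X_train, flip_labels(y_train, 0.4))

# Fraction of true spam each model still catches; the poisoned model's
# recall on spam drops, i.e. more malicious e-mails get labeled safe.
spam = X_test[y_test == 1]
print("clean spam recall:   ", clean_model.predict(spam).mean())
print("poisoned spam recall:", poisoned_model.predict(spam).mean())
```

Note that the attack touches only the training labels, never the model internals or the test inputs; this is what distinguishes data poisoning (a training-time attack) from evasion attacks, which perturb inputs at inference time.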

Papers

Showing 351–400 of 492 papers

Title | Status | Hype
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | - | 0
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | - | 0
An Investigation of Data Poisoning Defenses for Online Learning | - | 0
An Optimal Control View of Adversarial Machine Learning | - | 0
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data | - | 0
Approaching the Harm of Gradient Attacks While Only Flipping Labels | - | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | - | 0
A Robust Attack: Displacement Backdoor Attack | - | 0
Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm | - | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | - | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | - | 0
Attacks on the neural network and defense methods | - | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | - | 0
Backdoor Attack and Defense for Deep Regression | - | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Certifiers Make Neural Networks Vulnerable to Availability Attacks | - | 0
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation | - | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | - | 0
Backdoor Vulnerabilities in Normally Trained Deep Learning Models | - | 0
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning | - | 0
BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution | - | 0
Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems | - | 0
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | - | 0
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems | - | 0
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps | - | 0
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks | - | 0
Blockchain-based Federated Recommendation with Incentive Mechanism | - | 0
Blockchain for Large Language Model Security and Safety: A Holistic Survey | - | 0
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | - | 0
BrainWash: A Poisoning Attack to Forget in Continual Learning | - | 0
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models | - | 0
Breaking Fair Binary Classification with Optimal Flipping Attacks | - | 0
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach | - | 0
Balancing Privacy, Robustness, and Efficiency in Machine Learning | - | 0
Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | - | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | - | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | - | 0
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | - | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | - | 0
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | - | 0
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | - | 0
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | - | 0
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing | - | 0
Clean Label Attacks against SLU Systems | - | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | - | 0
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing | - | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
Computation and Data Efficient Backdoor Attacks | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0

No leaderboard results yet.