SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
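As a minimal illustration of the definition above (a toy sketch, not taken from the source paper: the dataset, classifier, and all names are invented), a label-flipping poisoning attack against a simple k-nearest-neighbors classifier relabels the training points closest to a target input, so the target's neighborhood votes for the attacker's desired class:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: class 0 ("ham") clustered at -2, class 1 ("spam") at +2.
X = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
y = np.array([0] * 100 + [1] * 100)

def knn_predict(X_train, y_train, x, k=3):
    # Majority vote over the k nearest training points.
    idx = np.argsort(np.abs(X_train - x))[:k]
    return int(np.round(y_train[idx].mean()))

target = 2.1  # a "spam" example the attacker wants labeled "ham"
print(knn_predict(X, y, target))  # clean training set: predicts 1 (spam)

# Label-flipping poison: relabel the k training points nearest to the
# target as "ham" (class 0), so the target's neighborhood outvotes spam.
y_poisoned = y.copy()
y_poisoned[np.argsort(np.abs(X - target))[:3]] = 0
print(knn_predict(X, y_poisoned, target))  # poisoned: predicts 0 (ham)
```

Flipping only 3 of 200 labels suffices here because k-NN decisions are purely local; attacks on parametric models instead craft poison points that shift the learned decision boundary.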

Papers

Showing 101–150 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | | 0 |
| Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection | | 0 |
| Model Hijacking Attack in Federated Learning | | 0 |
| Atlas: A Framework for ML Lifecycle Provenance & Transparency | | 0 |
| Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing | | 0 |
| Clean Label Attacks against SLU Systems | | 0 |
| Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | | 0 |
| CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | | 0 |
| Defending Against Adversarial Denial-of-Service Data Poisoning Attacks | | 0 |
| Compression-Resistant Backdoor Attack against Deep Neural Networks | | 0 |
| Computation and Data Efficient Backdoor Attacks | | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
| Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | | 0 |
| ControlNET: A Firewall for RAG-based LLM System | | 0 |
| Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks | | 0 |
| Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | | 0 |
| Concealed Data Poisoning Attacks on NLP Models | | 0 |
| Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution | | 0 |
| CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation | | 0 |
| Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | | 0 |
| Data-Dependent Stability Analysis of Adversarial Training | | 0 |
| Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | | 0 |
| Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | | 0 |
| Data Poisoning: An Overlooked Threat to Power Grid Resilience | | 0 |
| Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | | 0 |
| Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm | | 0 |
| A Robust Attack: Displacement Backdoor Attack | | 0 |
| A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers | | 0 |
| Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm | | 0 |
| Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm | | 0 |
| De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks | | 0 |
| Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0 |
| Cascading Adversarial Bias from Injection to Distillation in Language Models | | 0 |
| Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | | 0 |
| Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks | | 0 |
| Balancing Privacy, Robustness, and Efficiency in Machine Learning | | 0 |
| Approaching the Harm of Gradient Attacks While Only Flipping Labels | | 0 |
| Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It | | 0 |
| Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach | | 0 |
| Breaking Fair Binary Classification with Optimal Flipping Attacks | | 0 |
| A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data | | 0 |
| Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models | | 0 |
| BrainWash: A Poisoning Attack to Forget in Continual Learning | | 0 |
| An Optimal Control View of Adversarial Machine Learning | | 0 |
| Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | | 0 |
| ABC-FL: Anomalous and Benign client Classification in Federated Learning | | 0 |
| Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy | | 0 |
| Blockchain for Large Language Model Security and Safety: A Holistic Survey | | 0 |
| An Investigation of Data Poisoning Defenses for Online Learning | | 0 |
Page 3 of 10

No leaderboard results yet.