SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
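To make the definition concrete, here is a minimal, self-contained sketch of the simplest form of data poisoning, label flipping, on a toy spam filter. The nearest-centroid classifier, the synthetic 2-D features, and the 40% flip rate are all illustrative assumptions, not taken from any of the papers below: the attacker relabels part of the spam training data as "safe," which drags the learned "safe" centroid toward the spam region and lets more fresh spam through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spam filter" training set: 2-D features, class 0 = safe, class 1 = spam.
n = 200
X = np.vstack([rng.normal(0, 1, (n, 2)),      # safe mail clusters near (0, 0)
               rng.normal(3, 1, (n, 2))])     # spam clusters near (3, 3)
y = np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    """Train a nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each point to the class with the nearest centroid."""
    dists = np.stack([np.linalg.norm(X - mu, axis=1) for mu in centroids.values()])
    return np.array(list(centroids))[dists.argmin(axis=0)]

# Poisoning step (illustrative): the attacker flips the labels of 40% of the
# spam training examples to "safe", pulling the "safe" centroid toward spam.
y_poisoned = y.copy()
spam_idx = np.where(y == 1)[0]
flipped = rng.choice(spam_idx, size=int(0.4 * n), replace=False)
y_poisoned[flipped] = 0

clean_model = fit_centroids(X, y)
poisoned_model = fit_centroids(X, y_poisoned)

# Fresh spam the attacker wants to slip past the deployed filter.
target_spam = rng.normal(3, 1, (1000, 2))
clean_catch = (predict(clean_model, target_spam) == 1).mean()
pois_catch = (predict(poisoned_model, target_spam) == 1).mean()
print(f"spam caught by clean model:    {clean_catch:.2f}")
print(f"spam caught by poisoned model: {pois_catch:.2f}")
```

Running the sketch shows the poisoned model catching a visibly smaller fraction of fresh spam than the clean one, even though the attacker only touched training labels, never the model or its test inputs. Real attacks in the papers below are subtler (clean-label poisoning, backdoor triggers, federated-learning poisoning), but the threat model is the same.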

Papers

Showing 401–450 of 492 papers

Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers
ControlNET: A Firewall for RAG-based LLM System
Concealed Data Poisoning Attacks on NLP Models
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution
CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Data-Dependent Stability Analysis of Adversarial Training
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study
Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Data Poisoning: An Overlooked Threat to Power Grid Resilience
Data Poisoning Attack against Unsupervised Node Embedding Methods
Data Poisoning Attacks against Online Learning
Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Data Poisoning Attacks in Contextual Bandits
Data Poisoning Attacks on EEG Signal-based Risk Assessment Systems
Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Data Poisoning Attacks on Federated Machine Learning
Data Poisoning Attacks on Neighborhood-based Recommender Systems
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods
Data Poisoning Attacks on Stochastic Bandits
Data Poisoning Attacks to Deep Learning Based Recommender Systems
Data Poisoning Attacks to Locally Differentially Private Range Query Protocols
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks
Data Poisoning Won’t Save You From Facial Recognition
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Data Shifts Hurt CoT: A Theoretical Study
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
Deep Learning Model Security: Threats and Defenses
Deep Probabilistic Models to Detect Data Poisoning Attacks
Defend Data Poisoning Attacks on Voice Authentication
Defending against Backdoor Attack on Deep Neural Networks
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm
Defending Against Adversarial Denial-of-Service Data Poisoning Attacks
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy
Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Detecting Backdoors in Deep Text Classifiers
Detection of Physiological Data Tampering Attacks with Quantum Machine Learning
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats
Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm
Don't Forget What I did?: Assessing Client Contributions in Federated Learning
DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations
Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
Page 9 of 10

No leaderboard results yet.