
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
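The definition above can be illustrated with a minimal label-flipping sketch. The toy 3-nearest-neighbour "spam filter", the feature values, and the `poison()` helper below are all hypothetical, chosen only to show how relabeling a few training points near a target example changes the trained model's prediction; real poisoning attacks are far more subtle.

```python
# Minimal sketch of a label-flipping data poisoning attack (illustrative only).
from collections import Counter

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def knn_predict(data, x, k=3):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(data, key=lambda pair: dist2(pair[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def poison(data, target, flips, new_label):
    """Attacker relabels the `flips` training points nearest the target."""
    ranked = sorted(range(len(data)), key=lambda i: dist2(data[i][0], target))
    flipped = set(ranked[:flips])
    return [(x, new_label if i in flipped else y)
            for i, (x, y) in enumerate(data)]

# Hypothetical features: (link density, exclamation-mark density).
clean = [
    ([0.0, 0.1], "safe"), ([0.2, 0.0], "safe"), ([0.1, 0.2], "safe"),
    ([0.9, 1.0], "spam"), ([1.0, 0.8], "spam"), ([0.8, 0.9], "spam"),
]

target = [0.9, 0.9]  # malicious e-mail the attacker wants labelled "safe"
poisoned = poison(clean, target, flips=2, new_label="safe")

print(knn_predict(clean, target))     # spam
print(knn_predict(poisoned, target))  # safe
```

With the clean labels, all three of the target's nearest neighbours are spam, so the model flags it; after the attacker flips just two of those labels, the majority vote turns "safe" even though the features are unchanged.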

Papers

Showing papers 426-450 of 492 (all with a Hype score of 0):

- Data Shifts Hurt CoT: A Theoretical Study
- Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
- Deep Learning Model Security: Threats and Defenses
- Deep Probabilistic Models to Detect Data Poisoning Attacks
- Defend Data Poisoning Attacks on Voice Authentication
- Defending against Backdoor Attack on Deep Neural Networks
- Defending Against Backdoor Attacks Using Ensembles of Weak Learners
- Defending Backdoor Data Poisoning Attacks by Using Noisy Label Defense Algorithm
- Defending Against Adversarial Denial-of-Service Data Poisoning Attacks
- Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy
- Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm
- De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
- Detecting Backdoors in Deep Text Classifiers
- Detection of Physiological Data Tampering Attacks with Quantum Machine Learning
- Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
- Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols
- Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats
- Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
- Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm
- Don't Forget What I did?: Assessing Client Contributions in Federated Learning
- DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations
- Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning
- Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers
- Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models
- Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
