SOTA Verified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to an attacker-chosen class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
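As a toy illustration of the targeted attack described above (not drawn from any of the papers listed below), the sketch that follows poisons the training set of a 1-nearest-neighbour classifier by flipping the label of a single training point. All data and names here are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

def knn1_predict(X, y, x):
    # 1-nearest-neighbour: predict the label of the closest training point
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

# Toy training set: class 0 = "safe" mail, class 1 = "spam" (1-D features)
X = np.array([[-2.0], [-1.8], [-2.2], [2.0], [2.2], [2.4]])
y = np.array([0, 0, 0, 1, 1, 1])

target = np.array([2.15])            # spam message the attacker wants marked safe
print(knn1_predict(X, y, target))    # -> 1 (correctly flagged as spam)

# Targeted label-flip poisoning: relabel the training point nearest the target.
y_poisoned = y.copy()
y_poisoned[4] = 0                    # the point [2.2] is now labelled "safe"
print(knn1_predict(X, y_poisoned, target))  # -> 0 (the spam now passes as safe)
```

A single flipped label suffices here because 1-NN predictions depend only on the nearest training point; attacks on real learners (and the defenses surveyed below) must instead shift decision boundaries through many subtly corrupted examples.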

Papers

Showing 151–200 of 492 papers

Title | Status | Hype
Learning from Convolution-based Unlearnable Datasets | Code | 0
Odyssey: Creation, Analysis and Detection of Trojan Models | Code | 0
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation | Code | 0
Indiscriminate Data Poisoning Attacks on Neural Networks | Code | 0
Depth-2 Neural Networks Under a Data-Poisoning Attack | Code | 0
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Defending Distributed Classifiers Against Data Poisoning Attacks | Code | 0
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | Code | 0
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models | Code | 0
Game-Theoretic Unlearnable Example Generator | Code | 0
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0
Fooling Partial Dependence via Data Poisoning | Code | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
On Adversarial Bias and the Robustness of Fair Machine Learning | Code | 0
Training-free Lexical Backdoor Attacks on Language Models | Code | 0
Data Poisoning Attack against Unsupervised Node Embedding Methods | - | 0
Data Poisoning: An Overlooked Threat to Power Grid Resilience | - | 0
Data Poisoning against Differentially-Private Learners: Attacks and Defenses | - | 0
Adversarial Vulnerability of Active Transfer Learning | - | 0
Data-Driven Control and Data-Poisoning attacks in Buildings: the KTH Live-In Lab case study | - | 0
Data-Dependent Stability Analysis of Adversarial Training | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | - | 0
CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation | - | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | - | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | - | 0
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution | - | 0
Concealed Data Poisoning Attacks on NLP Models | - | 0
Backdoor Attack and Defense for Deep Regression | - | 0
ControlNET: A Firewall for RAG-based LLM System | - | 0
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
Computation and Data Efficient Backdoor Attacks | - | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | - | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | - | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | - | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | - | 0
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing | - | 0
CLEAR: Clean-Up Sample-Targeted Backdoor in Neural Networks | - | 0
Attacks on the neural network and defense methods | - | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | - | 0
Clean Label Attacks against SLU Systems | - | 0
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing | - | 0
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | - | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | - | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | - | 0
Page 4 of 10
