SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that malicious examples are assigned to classes the attacker desires (e.g., spam e-mails labeled as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
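The definition above can be made concrete with a minimal sketch: a toy label-flipping poisoning attack against a nearest-centroid "spam" classifier. All features, points, and names here are illustrative assumptions for this page's running spam example, not the method of any paper listed below.

```python
def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / n for i in range(dim))

def train(examples):
    """Fit one centroid per label from (features, label) pairs."""
    by_label = {}
    for x, y in examples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(x, model):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

# Toy 2-D "e-mail features": label 0 = safe, label 1 = spam.
safe = [(-2.0, -2.0), (-2.5, -1.5), (-1.5, -2.5), (-2.0, -1.0)]
spam = [(2.0, 2.0), (2.5, 1.5), (1.5, 2.5), (2.0, 1.0)]
clean_data = [(x, 0) for x in safe] + [(x, 1) for x in spam]

target = (2.5, 2.5)  # a clearly spam-like e-mail the attacker wants marked safe

clean_model = train(clean_data)
print(predict(target, clean_model))  # → 1 (spam)

# Poisoning: inject copies of the target mislabeled as "safe",
# dragging the safe centroid into the spam region.
poison = [(target, 0)] * 40
poisoned_model = train(clean_data + poison)
print(predict(target, poisoned_model))  # → 0 (safe)
```

With the clean data, the target sits far closer to the spam centroid; after the mislabeled injections, the "safe" centroid moves to roughly (2.09, 2.11) and captures the target, which is exactly the "spam labeled as safe" outcome the definition describes.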

Papers

Showing 301–350 of 492 papers

Title / Status / Hype (all entries below: no status, hype 0)

Towards Poisoning Fair Representations
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning
Training set cleansing of backdoor poisoning by self-supervised representation learning
Data Poisoning Attack Aiming the Vulnerability of Continual Learning
Model-Agnostic Explanations using Minimal Forcing Subsets
TrojanTime: Backdoor Attacks on Time Series Classification
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems
Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks
Understanding Influence Functions and Datamodels via Harmonic Analysis
Unlearnable Examples Detection via Iterative Filtering
UTrace: Poisoning Forensics for Private Collaborative Learning
VPN: Verification of Poisoning in Neural Networks
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning
Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems
Model Hijacking Attack in Federated Learning
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense
Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning
ABC-FL: Anomalous and Benign client Classification in Federated Learning
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Active Learning Under Malicious Mislabeling and Poisoning Attacks
Advancements in Recommender Systems: A Comprehensive Analysis Based on Data, Algorithms, and Evaluation
Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems
Adversarial Vulnerability of Active Transfer Learning
A Framework of Randomized Selection Based Certified Defenses Against Data Poisoning Attacks
A GAN-based data poisoning framework against anomaly detection in vertical federated learning
A Geometric Approach to Problems in Optimization and Data Science
A Gradient Method for Multilevel Optimization
A Linear Approach to Data Poisoning
A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters
Page 7 of 10

No leaderboard results yet.