
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
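As a concrete illustration, the sketch below shows the simplest form of this attack, label flipping: an attacker who controls part of the training set relabels a fraction of malicious examples as benign, and the trained model learns to let those examples through. The synthetic dataset, scikit-learn's LogisticRegression, and the 30% flip rate are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal sketch of a targeted label-flipping poisoning attack.
# scikit-learn stands in for any training pipeline; the attack itself is
# model-agnostic. All names and parameters here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary task: label 1 plays the role of "malicious" (e.g., spam).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison(labels, flip_rate=0.3):
    """Flip a fraction of malicious (label 1) examples to benign (label 0)."""
    poisoned = labels.copy()
    malicious = np.flatnonzero(labels == 1)
    flipped = rng.choice(malicious, size=int(flip_rate * malicious.size),
                         replace=False)
    poisoned[flipped] = 0
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison(y_train))

# Attack success shows up as a drop in the fraction of malicious test
# examples the model flags: corrupted labels at training time translate
# into attacker-chosen behavior at prediction time.
malicious_test = X_test[y_test == 1]
print("clean model flags:   ", clean_model.predict(malicious_test).mean())
print("poisoned model flags:", poisoned_model.predict(malicious_test).mean())
```

In practice the flip rate, target class, and model family depend on how much of the training pipeline the attacker can reach; many of the papers below study stealthier variants (clean-label attacks, backdoor triggers, poisoned federated updates) that follow the same principle.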

Papers

Showing 101-150 of 492 papers (page 3 of 10)

| Title | Status | Hype |
| --- | --- | --- |
| Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws | Code | 3 |
| Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | | 0 |
| Model Hijacking Attack in Federated Learning | | 0 |
| Blockchain for Large Language Model Security and Safety: A Holistic Survey | | 0 |
| Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | | 0 |
| Data Poisoning: An Overlooked Threat to Power Grid Resilience | | 0 |
| Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | | 0 |
| Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks | Code | 0 |
| Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0 |
| Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | | 0 |
| Robust Yet Efficient Conformal Prediction Sets | Code | 0 |
| Advancements in Recommender Systems: A Comprehensive Analysis Based on Data, Algorithms, and Evaluation | | 0 |
| A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends | Code | 4 |
| Neuromimetic metaplasticity for adaptive continual learning | | 0 |
| If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | | 0 |
| Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | | 0 |
| Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0 |
| On the Robustness of Graph Reduction Against GNN Backdoor | | 0 |
| Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0 |
| BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning | | 0 |
| FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0 |
| Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition | | 0 |
| A Study of Backdoors in Instruction Fine-tuned Language Models | | 0 |
| Certified Robustness to Data Poisoning in Gradient-Based Training | Code | 0 |
| Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0 |
| Exploring Vulnerabilities and Protections in Large Language Models: A Survey | | 0 |
| PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1 |
| PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | Code | 1 |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1 |
| Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0 |
| Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | | 0 |
| Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | | 0 |
| Fed-Credit: Robust Federated Learning with Credibility Management | | 0 |
| SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks | | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | | 0 |
| Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search | | 0 |
| On the Relevance of Byzantine Robust Optimization Against Data Poisoning | | 0 |
| Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning | | 0 |
| Data Poisoning Attacks on Off-Policy Policy Evaluation Methods | | 0 |
| Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning | | 0 |
| Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors | Code | 0 |
| A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | | 0 |
| Have You Poisoned My Data? Defending Neural Networks against Data Poisoning | | 0 |
| Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates | Code | 0 |
| Optimistic Verifiable Training by Controlling Hardware Nondeterminism | Code | 1 |
| Don't Forget What I did?: Assessing Client Contributions in Federated Learning | | 0 |
| Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code | | 0 |
| Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0 |
| Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models | | 0 |
| Learning to Poison Large Language Models for Downstream Manipulation | Code | 1 |
