Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
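The simplest instance of this attack is label flipping: before training, the adversary relabels a fraction of examples from one class (e.g., "spam") as another (e.g., "ham"). The sketch below is illustrative only; the function and dataset are invented for this example and do not come from any of the papers listed here.

```python
import random

def flip_labels(dataset, target_label, new_label, fraction, seed=0):
    """Label-flipping poison: relabel a fraction of target_label examples.

    `dataset` is a list of (features, label) pairs. All names here are
    hypothetical, chosen for the sketch.
    """
    rng = random.Random(seed)  # fixed seed so the poisoning is reproducible
    idx = [i for i, (_, y) in enumerate(dataset) if y == target_label]
    k = int(len(idx) * fraction)
    poisoned = list(dataset)  # copy; leave the clean dataset untouched
    for i in rng.sample(idx, k):
        x, _ = poisoned[i]
        poisoned[i] = (x, new_label)  # same features, attacker-chosen label
    return poisoned

# Toy example: flip 50% of "spam" labels to "ham" before training,
# nudging a model trained on `poisoned` toward classifying spam as safe.
clean = [([1, 0], "spam"), ([1, 1], "spam"), ([0, 1], "ham"), ([0, 0], "ham")]
poisoned = flip_labels(clean, target_label="spam", new_label="ham", fraction=0.5)
```

A model fit on `poisoned` instead of `clean` sees contradictory labels for spam-like inputs, which is exactly the behavior-steering effect the definition above describes. More sophisticated attacks keep labels intact and perturb the features instead, which makes them harder to detect.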

Papers

Showing 126-150 of 492 papers

Title | Status | Hype
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | - | 0
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | - | 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | - | 0
Fed-Credit: Robust Federated Learning with Credibility Management | - | 0
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search | - | 0
On the Relevance of Byzantine Robust Optimization Against Data Poisoning | - | 0
Dual Model Replacement:invisible Multi-target Backdoor Attack based on Federal Learning | - | 0
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods | - | 0
Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning | - | 0
Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors | Code | 0
A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks | - | 0
Have You Poisoned My Data? Defending Neural Networks against Data Poisoning | - | 0
Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates | Code | 0
Optimistic Verifiable Training by Controlling Hardware Nondeterminism | Code | 1
Don't Forget What I did?: Assessing Client Contributions in Federated Learning | - | 0
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code | - | 0
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0
Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models | - | 0
Learning to Poison Large Language Models for Downstream Manipulation | Code | 1
Page 6 of 20

No leaderboard results yet.