
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
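The attack described above can be sketched with a toy 1-nearest-neighbor "spam filter". This is a minimal illustration only; the dataset, feature values, and function names are hypothetical and not taken from any paper listed below.

```python
# Minimal sketch of a data-poisoning attack against a toy
# 1-nearest-neighbor classifier. All data and names are illustrative.

def predict_1nn(train_set, x):
    """Classify x with the label of the nearest training example."""
    nearest = min(train_set, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Clean training data: low feature values are ham, high values are spam.
clean = [(0.0, "ham"), (0.1, "ham"), (0.2, "ham"),
         (0.8, "spam"), (0.9, "spam"), (1.0, "spam")]

# The attacker injects a few mislabeled points near the target region,
# so spam-like inputs now sit closest to "ham"-labeled examples.
poison = [(0.84, "ham"), (0.86, "ham")]

print(predict_1nn(clean, 0.85))           # -> spam
print(predict_1nn(clean + poison, 0.85))  # -> ham (poisoned model)
```

Even two well-placed mislabeled points flip the prediction for the targeted input, while predictions far from the poisoned region are unaffected.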

Papers

Showing 11–20 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Data Poisoning in Deep Learning: A Survey | Code | 1 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
| PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning | Code | 1 |
| Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1 |
| PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1 |
| PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | Code | 1 |
| Optimistic Verifiable Training by Controlling Hardware Nondeterminism | Code | 1 |
| Learning to Poison Large Language Models for Downstream Manipulation | Code | 1 |
| Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1 |
| FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1 |
Page 2 of 50

No leaderboard results yet.