
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as the attacker's desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
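For intuition, below is a minimal sketch of the simplest label-flipping form of the attack. The synthetic dataset, the logistic-regression victim model, and the 20% flip rate are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary task: class 1 plays the role of "spam".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Victim model trained on clean labels, for comparison.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: flip the labels of 20% of class-1 training examples to class 0
# ("mark spam as safe"), shifting the learned decision boundary.
rng = np.random.default_rng(0)
spam_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(spam_idx, size=int(0.2 * len(spam_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Measure how many true class-1 test examples each model still catches.
spam_test = X_test[y_test == 1]
print("clean model catches:   ", clean_model.predict(spam_test).mean())
print("poisoned model catches:", poisoned_model.predict(spam_test).mean())
```

The detection rate printed for the poisoned model is typically lower than for the clean one, illustrating how corrupting training labels alone can steer a model toward classifying malicious examples as safe.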

Papers

Showing 61–70 of 492 papers (page 7 of 50)

Title | Status | Hype
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Generative Poisoning Using Random Discriminators | Code | 1
Availability Attacks Create Shortcuts | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Learning to Poison Large Language Models for Downstream Manipulation | Code | 1

No leaderboard results yet.